The content within those reports is the same; the only difference is the download format. CSV (comma-separated values) can be imported into any number of spreadsheet applications. The XLSX format is designed to open in Excel.
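If you prefer to work with the report outside a spreadsheet, the CSV download can be read with a few lines of Python. This is a minimal sketch; the filename here is a placeholder, and the column names will be whatever headers appear in your actual download:

```python
import csv

# Placeholder filename; use the name of the report you downloaded.
with open("view_details_report.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Each row is a dict keyed by the report's column headers.
for row in rows[:5]:
    print(row)
```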
These reports currently contain similar information, with the View Details report containing the additional information from the Indexation Checks that occur 4 – 5 days after submission.
If all submitted URLs were requested to run in a single batch rather than drip-fed over a period of days, then when the automated indexation checks were processed, 18 of your URLs were indexed AND served, meaning they are findable in the search results.
That is a good idea. You can download the View Details report, copy the URLs that did not test as indexed, and resubmit them as a new project, one credit per URL.
The Automated Indexation Checks would run again on just those URLs 4 – 5 days after the last URL was processed.
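The copy-and-paste step can also be scripted. A minimal sketch, assuming the View Details CSV has columns named `URL` and `Indexed` with "yes"/"no" values (hypothetical names and values; check your report's actual headers):

```python
import csv

# Hypothetical column names; adjust to match your View Details report.
URL_COLUMN = "URL"
STATUS_COLUMN = "Indexed"

with open("view_details_report.csv", newline="", encoding="utf-8") as f:
    not_indexed = [
        row[URL_COLUMN]
        for row in csv.DictReader(f)
        if row[STATUS_COLUMN].strip().lower() != "yes"
    ]

# Write one URL per line, ready to paste into a new project.
with open("resubmit_urls.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(not_indexed))

print(f"{len(not_indexed)} URLs queued for resubmission")
```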
For the Agency Level Tier and above, there is a feature called Smart URL Resubmission. It automatically identifies the URLs that are not indexed and resubmits them, one credit per URL.
The Automated Indexation Checks would run again on just those URLs 4 – 5 days after the last URL was processed.
There are many reasons.
Assuming the submitted URLs come from sites that do not block Googlebot or Bingbot and the HTML and JavaScript load properly, the addition of content into Google's index is solely at Google's discretion. IndexZilla controls the processing of URLs only insofar as to know that they have been offered up to Google and crawled.
Google clearly has criteria concerning which content is accepted into its index, but it may also choose not to serve that content. Another way to describe serving is making content findable in the results.
In this case, even if the content is in Google's index, it would not display as indexed after the automated indexation checks.
Hence, indexed but not served.
In our indexation testing data, which spans over three and a half years so far, it is common for content to be indexed but not served. Oftentimes Google engages in activities that delay or stop the serving of indexed content. Previously, we had a cache that could confirm this condition, which was useful when no Search Console data was accessible or possible.
Assuming there are no blocks on the site that prevent crawlers from reaching the page, there are a number of reasons why Google does not serve content in its search results despite having crawled it.
You are encouraged to check your URLs, especially those that are on other people’s sites that you do not control. One way is to use the Rich Snippet Testing Tool.
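Another quick programmatic check, using only Python's standard library, is to confirm that the site's robots.txt does not block Googlebot or Bingbot from the page. This is a sketch of one heuristic, not a full indexability audit, and the URL is a placeholder:

```python
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

def bots_allowed(url: str) -> dict:
    """Check whether Googlebot and Bingbot may fetch this URL per robots.txt."""
    parts = urlsplit(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetches and parses the site's robots.txt
    return {bot: parser.can_fetch(bot, url) for bot in ("Googlebot", "Bingbot")}

print(bots_allowed("https://example.com/some-page"))
```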
If you would like us to look into your particular situation, reach out to [email protected].
In Carolyn’s indexation research, there are four common reasons why content is not welcomed into Google’s index (a quick-check sketch for the first two follows the list):
- the bots are blocked
- the word count is very low
- the structure of the content
- there is already topical content on the site that answers the same query – this gets into her signature work on Helpful Content Analysis, available on Gumroad: Decoding Google’s Helpful Content System
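The first two items lend themselves to quick automated checks (robots.txt blocking is sketched above). Below is a rough word-count check, assuming pages are plain server-rendered HTML; JavaScript-rendered content would need a headless browser instead. The 300-word threshold is an illustrative assumption, not a documented cutoff:

```python
import re
import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth:
            self.chunks.append(data)

def visible_word_count(url: str) -> int:
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    extractor = TextExtractor()
    extractor.feed(html)
    return len(re.findall(r"\w+", " ".join(extractor.chunks)))

# Illustrative threshold only; Google publishes no fixed word-count minimum.
count = visible_word_count("https://example.com/some-page")
print(f"{count} words", "(possibly thin)" if count < 300 else "")
```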
Send your questions to [email protected] – we answer all customer questions within one business day, Monday – Friday.