Don’t trust everything online! The CRAPP test on the Evaluating Information and Fake News Library Guide is a good place to start.
Google’s spiders, or web crawlers, index content on the web, a bit like the index in the back of a book. They can only do this for sites that are publicly accessible, so not everything is in Google.
When you enter a search, Google’s algorithms work to find the best matches for your query. Factors taken into consideration when ranking search results include:
Search engines, such as Google, do not search the Web directly. Instead, they search databases of webpages that have been harvested from the Internet by computer programs known as robots or spiders. These spiders periodically crawl the Web and index the text, links and other data in each webpage. This information is stored in the search engine’s database, which is queried when a search is performed.
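The crawl-then-query process described above can be sketched in miniature. This is a toy illustration only, not how Google actually works: the "web" is a hypothetical in-memory set of pages (a real spider would fetch pages over HTTP and follow their links), and the index is a simple inverted index mapping words to the pages that contain them.

```python
# Hypothetical pages standing in for the crawled web.
pages = {
    "site-a/home": "UWA Library OneSearch finds journal articles and theses",
    "site-b/news": "Fake news spreads quickly online",
    "site-c/guide": "Evaluating information online with library guides",
}

def build_index(pages):
    """Map each word to the set of pages containing it (an inverted index)."""
    index = {}
    for url, text in pages.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(url)
    return index

def search(index, query):
    """Return the pages that contain every word in the query."""
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

index = build_index(pages)
print(search(index, "library online"))  # only site-c mentions both words
```

When a search is performed, it is this pre-built index that is queried, not the live web, which is why recently changed or never-crawled pages may not appear in results.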
The spiders do not capture everything on the Web, notably:
This information is generally not included in search results because it is inaccessible to search engine spiders. It is therefore known as the invisible, hidden or deep web.
It is recommended to use a variety of sources when looking for information, including a search engine other than Google, as well as UWA Library resources, which are accessible through OneSearch and the databases.
Another useful tool is Google Scholar, a search engine that searches across scholarly literature such as journal articles, conference proceedings, theses and government reports. See the Google Scholar page for more details and search techniques.