Search engine optimization

Search engine optimization (SEO) is the process of increasing the quality and quantity of website traffic[1] by increasing the visibility of a website or a web page to users of a web search engine.
SEO refers to the improvement of unpaid results (known as "natural" or "organic" results), and excludes the purchase of paid placement.

SEO may target different kinds of search, including image search, video search, academic search,[2] news search, and industry-specific vertical search engines.

Optimizing a website may involve editing its content, adding content, and modifying HTML and associated coding, both to increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines. Promoting a site to increase the number of backlinks, or inbound links, is another SEO tactic. By May 2015, mobile search had surpassed desktop search.[3]

As an Internet marketing strategy, SEO considers how search engines work, the computer-programmed algorithms that dictate search engine behavior, what people search for, the actual search terms or keywords typed into search engines, and which search engines are preferred by their target audience. SEO is performed because a website will receive more visitors from a search engine the higher it ranks in the search engine results page (SERP). These visitors can then be converted into customers.[4]

SEO differs from local search engine optimization in that the latter is focused on optimizing a business's online presence so that its web pages will be displayed by search engines when a user enters a local search for its products or services. The former instead focuses more on national or international searches.

History

Webmasters and content providers began optimizing websites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, webmasters needed only to submit the address of a page, or URL, to the various engines, which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.[5] The process involves a search engine spider downloading a page and storing it on the search engine's own server. A second program, known as an indexer, extracts information about the page, such as the words it contains, where they are located, and any weight for specific words, as well as all links the page contains. All of this information is then placed into a scheduler for crawling at a later date.

Website owners recognized the value of a high ranking and visibility in search engine results,[6] creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the phrase "search engine optimization" probably came into use in 1997. Sullivan credits Bruce Clay as one of the first people to popularize the term.[7] On May 2, 2007,[8] Jason Gambert attempted to trademark the term SEO by convincing the Trademark Office in Arizona[9] that SEO is a "process" involving manipulation of keywords and not a "marketing service."

Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using metadata to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.[10] Web content providers also manipulated some attributes within the HTML source of a page in an attempt to rank well in search engines.[11] By 1997, search engine designers recognized that webmasters were making efforts to rank well in their search engine, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as AltaVista and Infoseek, adjusted their algorithms to prevent webmasters from manipulating rankings.[12]

By relying so heavily on factors such as keyword density, which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. This meant moving away from heavy reliance on term density toward a more holistic process for scoring semantic signals.[13] Since the success and popularity of a search engine are determined by its ability to produce the most relevant results for any given search, poor-quality or irrelevant search results could lead users to find other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate. In 2005, an annual conference, AIRWeb (Adversarial Information Retrieval on the Web), was created to bring together practitioners and researchers concerned with search engine optimization and related topics.[14]

Companies that employ overly aggressive techniques can get their client websites banned from the search results. In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.[15] Wired magazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban.[16] Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.[17]

Some search engines have also reached out to the SEO industry and are frequent sponsors and guests at SEO conferences, webchats, and seminars. Major search engines provide information and guidelines to help with website optimization.[18][19] Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website.[20] Bing Webmaster Tools provides a way for webmasters to submit a sitemap and web feeds, allows users to determine the "crawl rate", and tracks the web pages' index status.

In 2015, it was reported that Google was developing and promoting mobile search as a key feature within future products. In response, many brands began to take a different approach to their Internet marketing strategies.[21]

Relationship with Google

In 1998, two graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub", a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[22] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random web surfer.
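
The random-surfer model lends itself to a short illustration. The following Python sketch runs the standard power-iteration computation of PageRank over a tiny, hypothetical link graph; the three-page web and the damping factor of 0.85 are illustrative assumptions, not values taken from Google.

# Minimal PageRank sketch (random-surfer model). The toy link graph and the
# damping factor are illustrative assumptions, not figures disclosed by Google.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}              # start from a uniform distribution
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                         # dangling page: spread rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share        # each outbound link passes on a share
        rank = new_rank
    return rank

# Hypothetical three-page web: A and C both link to B, and B links back to A.
toy_web = {"A": ["B"], "B": ["A"], "C": ["B"]}
print(pagerank(toy_web))                             # B ends up with the highest score

In this toy graph, page B collects rank from both A and C and ends up with the highest score, mirroring the intuition that a page with more or stronger inbound links is more likely to be reached by the random surfer.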

Page and Brin founded Google in 1998.[23] Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[24] Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.[25]

By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. In June 2007, The New York Times' Saul Hansell stated Google ranks sites using more than 200 different signals.[26] The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages. Some SEO practitioners have studied different approaches to search engine optimization, and have shared their personal opinions.[27] Patents related to search engines can provide information to better understand search engines.[28] In 2005, Google began personalizing search results for each user. Depending on their history of previous searches, Google crafted results for logged-in users.[29]

In 2007, Google announced a campaign against paid links that transfer PageRank.[30] On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Google Bot would no longer treat nofollowed links in the same way, to prevent SEO service providers from using nofollow for PageRank sculpting.[31] As a result of this change, the use of nofollow led to the evaporation of PageRank. To avoid this, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the use of iframes, Flash, and JavaScript.[32]

In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.[33] On June 8, 2010, a new web indexing system called Google Caffeine was announced. Designed to allow users to find news results, forum posts, and other content much sooner after publishing than before, Google Caffeine was a change to the way Google updated its index in order to make things show up more quickly on Google than before. According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."[34] Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically, site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.[35]

In February 2011, Google announced the Panda update, which penalizes websites containing content duplicated from other websites and sources. Historically, websites had copied content from one another and benefited in search engine rankings by engaging in this practice. However, Google implemented a new system that punishes sites whose content is not unique.[36] The 2012 Google Penguin attempted to penalize websites that used manipulative techniques to improve their rankings on the search engine.[37] Although Google Penguin has been presented as an algorithm aimed at fighting web spam, it really focuses on spammy links[38] by gauging the quality of the sites the links are coming from. The 2013 Google Hummingbird update featured an algorithm change designed to improve Google's natural language processing and semantic understanding of web pages. Hummingbird's language processing system falls under the newly recognized term of "conversational search", where the system pays more attention to each word in the query in order to better match pages to the meaning of the query rather than to a few words.[39] With regard to the changes made to search engine optimization for content publishers and writers, Hummingbird is intended to resolve issues by getting rid of irrelevant content and spam, allowing Google to produce high-quality content and rely on 'trusted' authors.

Methods

Getting indexed

[Figure: PageRank example]
Search engines use complex mathematical algorithms to interpret which websites a user seeks. In this diagram, if each bubble represents a website, programs sometimes called spiders examine which sites link to which other sites, with arrows representing these links. Websites getting more inbound links, or stronger links, are presumed to be more important and what the user is searching for. In this example, since website B is the recipient of numerous inbound links, it ranks more highly in a web search. And the links "carry through", such that website C, even though it only has one inbound link, has an inbound link from a highly popular site (B) while site E does not. Note: Percentages are rounded.

The leading search engines, such as Google, Bing, and Yahoo!, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search-engine-indexed pages do not need to be submitted because they are found automatically. The Yahoo! Directory and DMOZ, two major directories which closed in 2014 and 2017 respectively, both required manual submission and human editorial review.[40] Google offers Google Search Console, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links,[41] in addition to its URL submission console.[42] Yahoo! formerly operated a paid submission service that guaranteed crawling for a cost per click;[43] however, this practice was discontinued in 2009.
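
As a rough sketch of what an XML Sitemap submission involves, the snippet below builds a minimal Sitemap in the sitemaps.org format using Python's standard library; the URLs and dates are hypothetical examples, and the resulting file would still need to be published on the site and submitted through Search Console.

# Sketch: generate a minimal XML Sitemap in the sitemaps.org format.
# The URLs and last-modified dates below are hypothetical examples.
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    urlset = ET.Element("urlset",
                        xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for loc, lastmod in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = loc
        ET.SubElement(entry, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

pages = [("https://www.example.com/", "2019-01-01"),
         ("https://www.example.com/about", "2019-01-15")]
print(build_sitemap(pages))   # save the output as sitemap.xml and submit it in Search Console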

Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines. Distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.[44]

Today, most people search on Google using a mobile device.[45] In November 2016, Google announced a major change to the way it crawls websites and began making its index mobile-first, which means the mobile version of a given website becomes the starting point for what Google includes in its index.[46]

Preventing crawling

To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots (usually <meta name="robots" content="noindex">). When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[47]
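
A brief sketch of how the robots.txt rules described above are consumed: Python's standard urllib.robotparser module fetches and parses the file and can then answer whether a given path may be crawled. The domain and paths used here are hypothetical examples.

# Check crawl permissions against a site's robots.txt using the standard library.
# The domain and the paths tested are hypothetical examples.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()   # fetches and parses the robots.txt file

# A well-behaved crawler consults the parsed rules before requesting each page.
for path in ("https://www.example.com/products", "https://www.example.com/cart"):
    print(path, "crawlable:", rp.can_fetch("*", path))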

Increasing prominence

A variety of methods can increase the prominence of a webpage within the search results. Cross-linking between pages of the same website to provide more links to important pages may improve its visibility.[48] Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic.[48] Updating content so as to keep search engines crawling back frequently can give additional weight to a site. Adding relevant keywords to a web page's metadata, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic. URL canonicalization of web pages accessible via multiple URLs, using the canonical link element[49] or via 301 redirects, can help make sure links to different versions of the URL all count towards the page's link popularity score.
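
The idea behind URL canonicalization can be illustrated with a small sketch that maps common variants of the same URL to a single canonical form; the specific normalization rules below (forcing HTTPS, lowercasing the host, dropping tracking parameters, trimming trailing slashes) are illustrative assumptions rather than a definitive list.

# Sketch of URL canonicalization: map several URL variants to one canonical form.
# The normalization rules below are illustrative assumptions, not a standard.
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "ref"}

def canonicalize(url):
    parts = urlparse(url)
    host = parts.netloc.lower()                  # hostnames are case-insensitive
    path = parts.path.rstrip("/") or "/"         # treat /page and /page/ as the same
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if k not in TRACKING_PARAMS])
    return urlunparse(("https", host, path, "", query, ""))

variants = ["http://Example.com/Page/",
            "https://example.com/Page?utm_source=newsletter"]
print({v: canonicalize(v) for v in variants})
# Both variants map to https://example.com/Page, so inbound links to either
# version are consolidated onto the same page.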

White hat versus black hat techniques

SEO techniques can be classified into two broad categories: techniques that search engine companies recommend as part of good design ("white hat"), and those techniques of which search engines do not approve ("black hat"). The search engines attempt to minimize the effect of the latter, which includes spamdexing. Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO or black hat SEO.[50] White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.[51]

An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines[18][19][52] are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see. White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the online "spider" algorithms, rather than attempting to trick the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility,[53] although the two are not identical.

Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines or involve deception. One black hat technique uses hidden text, either as text colored similarly to the background, in an invisible div, or positioned off screen. Another method serves a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking. Another category sometimes used is grey hat SEO. This is in between the black hat and white hat approaches, where the methods employed avoid the site being penalized but do not act to produce the best content for users. Grey hat SEO is entirely focused on improving search engine rankings.

Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines' algorithms, or by a manual site review. One example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices.[54] Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's search engine results page.[55]

As marketing strategy

SEO is not an appropriate strategy for every website, and other Internet marketing strategies can be more effective, such as paid advertising through pay-per-click (PPC) campaigns, depending on the site operator's goals. Search engine marketing (SEM) is the practice of designing, running, and optimizing search engine ad campaigns.[56] Its difference from SEO is most simply depicted as the difference between paid and unpaid priority ranking in search results. Its purpose regards prominence more than relevance; website developers should regard SEM with the utmost importance with consideration to visibility, as most users navigate to the primary listings of their search.[57] A successful Internet marketing campaign may also depend upon building high-quality web pages to engage and persuade, setting up analytics programs to enable site owners to measure results, and improving a site's conversion rate.[58] In November 2015, Google released a full 160-page version of its Search Quality Rating Guidelines to the public,[59] which revealed a shift in their focus towards "usefulness" and mobile search. In recent years the mobile market has exploded, overtaking the use of desktops, as shown by StatCounter in October 2016, which analysed 2.5 million websites and found that 51.3% of the pages were loaded by a mobile device.[60] Google has been one of the companies utilizing the popularity of mobile usage by encouraging websites to use its Google Search Console and the Mobile-Friendly Test, which allow companies to check how their website appears in the search engine results and how user-friendly it is.

SEO may generate an adequate return on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors.[61] Search engines can change their algorithms, impacting a website's placement, possibly resulting in a serious loss of traffic. According to Google's CEO, Eric Schmidt, in 2010, Google made over 500 algorithm changes – almost 1.5 per day.[62] It is considered a wise business practice for website operators to liberate themselves from dependence on search engine traffic.[63] In addition to accessibility in terms of web crawlers (addressed above), user web accessibility has become increasingly important for SEO.

International markets

Optimization techniques are highly tuned to the dominant search engines in the target market. The search engines' market shares vary from market to market, as does competition. In 2003, Danny Sullivan stated that Google represented about 75% of all searches.[64] In markets outside the United States, Google's share is often larger, and Google remains the dominant search engine worldwide as of 2007.[65] As of 2006, Google had an 85–90% market share in Germany.[66] While there were hundreds of SEO firms in the US at that time, there were only about five in Germany.[66] As of June 2008, the market share of Google in the UK was close to 90% according to Hitwise.[67] That market share is achieved in a number of countries.

As of 2009, there are only a few large markets where Google is not the leading search engine. In most cases, when Google is not leading in a given market, it is lagging behind a local player. The most notable example markets are China, Japan, South Korea, Russia, and the Czech Republic, where respectively Baidu, Yahoo! Japan, Naver, Yandex, and Seznam are market leaders.

Successful search optimization for international markets may require professional translation of web pages, registration of a domain name with a top level domain in the target market, and web hosting that provides a local IP address. Otherwise, the fundamental elements of search optimization are essentially the same, regardless of language.[66]

Legal precedents

On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google. SearchKing's claim was that Google's tactics to prevent spamdexing constituted a tortious interference with contractual relations. On May 27, 2003, the court granted Google's motion to dismiss the complaint because SearchKing "failed to state a claim upon which relief may be granted."[68][69]

In March 2006, KinderStart filed a lawsuit against Google over search engine rankings. KinderStart's website was removed from Google's index prior to the lawsuit, and the amount of traffic to the site dropped by 70%. On March 16, 2007, the United States District Court for the Northern District of California (San Jose Division) dismissed KinderStart's complaint without leave to amend, and partially granted Google's motion for Rule 11 sanctions against KinderStart's attorney, requiring him to pay part of Google's legal expenses.[70][71]

Notes

  1. ^ "SEO - search engine optimization". Webopedia.
  2. ^ Beel, Jöran; Gipp, Bela; Wilde, Erik (2010). "Academic Search Engine Optimization (ASEO): Optimizing Scholarly Literature for Google Scholar and Co" (PDF). Journal of Scholarly Publishing. pp. 176–190. Retrieved April 18, 2010.
  3. ^ "Inside AdWords: Building for the next moment" Google Inside Adwords May 15, 2015.
  4. ^ Ortiz-Cordova, A. and Jansen, B. J. (2012) Classifying Web Search Queries in Order to Identify High Revenue Generating Customers. Journal of the American Society for Information Sciences and Technology. 63(7), 1426 – 1441.
  5. ^ Brian Pinkerton. "Finding What People Want: Experiences with the WebCrawler" (PDF). The Second International WWW Conference Chicago, USA, October 17–20, 1994. Retrieved May 7, 2007.
  6. ^ "Intro to Search Engine Optimization | Search Engine Watch". searchenginewatch.com. Retrieved June 29, 2017.
  7. ^ Danny Sullivan (June 14, 2004). "Who Invented the Term "Search Engine Optimization"?". Search Engine Watch. Archived from the original on April 23, 2010. Retrieved May 14, 2007. See Google groups thread.
  8. ^ "Trademark/Service Mark Application, Principal Register". Retrieved May 30, 2014.
  9. ^ "Trade Name Certification". State of Arizona.
  10. ^ Cory Doctorow (August 26, 2001). "Metacrap: Putting the torch to seven straw-men of the meta-utopia". e-LearningGuru. Archived from the original on April 9, 2007. Retrieved May 8, 2007.
  11. ^ Pringle, G.; Allison, L.; Dowe, D. (April 1998). "What is a tall poppy among web pages?". Proc. 7th Int. World Wide Web Conference. Retrieved May 8, 2007.
  12. ^ Laurie J. Flynn (November 11, 1996). "Desperately Seeking Surfers". New York Times. Retrieved May 9, 2007.
  13. ^ Jason Demers (January 20, 2016). "Is Keyword Density Still Important for SEO". Forbes. Retrieved August 15, 2016.
  14. ^ "AIRWeb". Adversarial Information Retrieval on the Web, annual conference. Retrieved October 4, 2012.
  15. ^ David Kesmodel (September 22, 2005). "Sites Get Dropped by Search Engines After Trying to 'Optimize' Rankings". Wall Street Journal. Retrieved July 30, 2008.
  16. ^ Adam L. Penenberg (September 8, 2005). "Legal Showdown in Search Fracas". Wired Magazine. Retrieved August 11, 2016.
  17. ^ Matt Cutts (February 2, 2006). "Confirming a penalty". mattcutts.com/blog. Retrieved May 9, 2007.
  18. ^ a b "Google's Guidelines on Site Design". google.com. Retrieved April 18, 2007.
  19. ^ a b "Bing Webmaster Guidelines". bing.com. Retrieved September 11, 2014.
  20. ^ "Sitemaps". google.com. Retrieved May 4, 2012.
  21. ^ "By the Data: For Consumers, Mobile is the Internet" Google for Entrepreneurs Startup Grind September 20, 2015.
  22. ^ Brin, Sergey & Page, Larry (1998). "The Anatomy of a Large-Scale Hypertextual Web Search Engine". Proceedings of the seventh international conference on World Wide Web. pp. 107–117. Retrieved May 8, 2007.
  23. ^ "Google's co-founders may not have the name recognition of say, Bill Gates, but give them time: Google hasn't been around nearly as long as Microsoft". October 15, 2008.
  24. ^ Thompson, Bill (December 19, 2003). "Is Google good for you?". BBC News. Retrieved May 16, 2007.
  25. ^ Zoltan Gyongyi & Hector Garcia-Molina (2005). "Link Spam Alliances" (PDF). Proceedings of the 31st VLDB Conference, Trondheim, Norway. Retrieved May 9, 2007.
  26. ^ Hansell, Saul (June 3, 2007). "Google Keeps Tweaking Its Search Engine". New York Times. Retrieved June 6, 2007.
  27. ^ Danny Sullivan (September 29, 2005). "Rundown On Search Ranking Factors". Search Engine Watch. Archived from the original on May 28, 2007. Retrieved May 8, 2007.
  28. ^ Christine Churchill (November 23, 2005). "Understanding Search Engine Patents". Search Engine Watch. Archived from the original on February 7, 2007. Retrieved May 8, 2007.
  29. ^ "Google Personalized Search Leaves Google Labs". searchenginewatch.com. Search Engine Watch. Retrieved September 5, 2009.
  30. ^ "8 Things We Learned About Google PageRank". www.searchenginejournal.com. Retrieved August 17, 2009.
  31. ^ "PageRank sculpting". Matt Cutts. Retrieved January 12, 2010.
  32. ^ "Google Loses "Backwards Compatibility" On Paid Link Blocking & PageRank Sculpting". searchengineland.com. Retrieved August 17, 2009.
  33. ^ "Personalized Search for everyone". Google. Retrieved December 14, 2009.
  34. ^ "Our new search index: Caffeine". Google: Official Blog. Retrieved May 10, 2014.
  35. ^ "Relevance Meets Real-Time Web". Google Blog.
  36. ^ "Google Search Quality Updates". Google Blog.
  37. ^ "What You Need to Know About Google's Penguin Update". Inc.com.
  38. ^ "Google Penguin looks mostly at your link source, says Google". Search Engine Land. October 10, 2016. Retrieved April 20, 2017.
  39. ^ "FAQ: All About The New Google "Hummingbird" Algorithm". www.searchengineland.com. Retrieved March 17, 2018.
  40. ^ "Submitting To Directories: Yahoo & The Open Directory". Search Engine Watch. March 12, 2007. Archived from the original on May 19, 2007. Retrieved May 15, 2007.
  41. ^ "What is a Sitemap file and why should I have one?". google.com. Retrieved March 19, 2007.
  42. ^ "Search Console - Crawl URL". Google. Retrieved December 18, 2015.
  43. ^ "Submitting To Search Crawlers: Google, Yahoo, Ask & Microsoft's Live Search". Search Engine Watch. March 12, 2007. Archived from the original on May 10, 2007. Retrieved May 15, 2007.
  44. ^ Cho, J.; Garcia-Molina, H. (1998). "Efficient crawling through URL ordering". Proceedings of the seventh conference on World Wide Web, Brisbane, Australia. Retrieved May 9, 2007.
  45. ^ "Mobile-first Index". Google.com. Retrieved March 19, 2018.
  46. ^ Phan, Doantam (November 4, 2016). "Mobile-first Indexing". Official Google Webmaster Central Blog. Google. Retrieved January 16, 2019.
  47. ^ "Newspapers Amok! New York Times Spamming Google? LA Times Hijacking Cars.com?". Search Engine Land. May 8, 2007. Retrieved May 9, 2007.
  48. ^ a b "The Most Important SEO Strategy". clickz.com. ClickZ. Retrieved April 18, 2010.
  49. ^ "Bing – Partnering to help solve duplicate content issues – Webmaster Blog – Bing Community". www.bing.com. Retrieved October 30, 2009.
  50. ^ Andrew Goodman. "Search Engine Showdown: Black hats vs. White hats at SES". SearchEngineWatch. Archived from the original on February 22, 2007. Retrieved May 9, 2007.
  51. ^ Jill Whalen (November 16, 2004). "Black Hat/White Hat Search Engine Optimization". searchengineguide.com. Retrieved May 9, 2007.
  52. ^ "What's an SEO? Does Google recommend working with companies that offer to make my site Google-friendly?". google.com. Retrieved April 18, 2007.
  53. ^ Andy Hagans (November 8, 2005). "High Accessibility Is Effective Search Engine Optimization". A List Apart. Retrieved May 9, 2007.
  54. ^ Matt Cutts (February 4, 2006). "Ramping up on international webspam". mattcutts.com/blog. Retrieved May 9, 2007.
  55. ^ Matt Cutts (February 7, 2006). "Recent reinclusions". mattcutts.com/blog. Retrieved May 9, 2007.
  56. ^ "Introduction to Search Engine Optimization: Getting Started With SEO to Achieve Business Goals" (PDF).
  57. ^ Tapan, Panda (July 2013). "Search Engine Marketing: Does the Knowledge Discovery Process Help Online Retailers?". IUP Journal of Knowledge Management; Hyderabad. 11 (3): 56–66 – via Proquest.
  58. ^ Melissa Burdon (March 13, 2007). "The Battle Between Search Engine Optimization and Conversion: Who Wins?". Grok.com. Archived from the original on March 15, 2008. Retrieved April 10, 2017.
  59. ^ "Search Quality Evaluator Guidelines" How Search Works November 12, 2015.
  60. ^ Titcomb, James. "Mobile web usage overtakes desktop for first time". www.telegraph.co.uk. The Telegraph. Retrieved March 17, 2018.
  61. ^ Andy Greenberg (April 30, 2007). "Condemned To Google Hell". Forbes. Archived from the original on May 2, 2007. Retrieved May 9, 2007.
  62. ^ Matt McGee (September 21, 2011). "Schmidt's testimony reveals how Google tests algorithm changes".
  63. ^ Jakob Nielsen (January 9, 2006). "Search Engines as Leeches on the Web". useit.com. Retrieved May 14, 2007.
  64. ^ Graham, Jefferson (August 26, 2003). "The search engine that could". USA Today. Retrieved May 15, 2007.
  65. ^ Greg Jarboe (February 22, 2007). "Stats Show Google Dominates the International Search Landscape". Search Engine Watch. Retrieved May 15, 2007.
  66. ^ a b c Mike Grehan (April 3, 2006). "Search Engine Optimizing for Europe". Click. Retrieved May 14, 2007.
  67. ^ Jack Schofield (June 10, 2008). "Google UK closes in on 90% market share". London: Guardian. Retrieved June 10, 2008.
  68. ^ "Search King, Inc. v. Google Technology, Inc., CIV-02-1457-M" (PDF). docstoc.com. May 27, 2003. Retrieved May 23, 2008.
  69. ^ Stefanie Olsen (May 30, 2003). "Judge dismisses suit against Google". CNET. Retrieved May 10, 2007.
  70. ^ "Technology & Marketing Law Blog: KinderStart v. Google Dismissed—With Sanctions Against KinderStart's Counsel". blog.ericgoldman.org. Retrieved June 23, 2008.
  71. ^ "Technology & Marketing Law Blog: Google Sued Over Rankings—KinderStart.com v. Google". blog.ericgoldman.org. Retrieved June 23, 2008.

Anchor text

The anchor text, link label, link text, or link title is the visible, clickable text in a hyperlink. The words contained in the anchor text can help determine the ranking that the page will receive from search engines. Since 1998, some web browsers have added the ability to show a tooltip for a hyperlink before it is selected. Not all links have anchor text because it may be obvious where the link will lead due to the context in which it is used. Anchor text normally remains below 50 characters. Different browsers display anchor text differently. Usually, web search engines analyze anchor text from hyperlinks on web pages. Other services apply the basic principles of anchor text analysis as well. For instance, academic search engines may use citation context to classify academic articles, and anchor text from documents linked in mind maps may be used as well.
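
As a sketch of how anchor text is typically gathered for analysis, the following Python snippet uses the standard html.parser module to collect each link's destination and visible text; the HTML fragment is a hypothetical example.

# Sketch: collect (href, anchor text) pairs from an HTML fragment.
# The fragment fed to the parser below is a hypothetical example.
from html.parser import HTMLParser

class AnchorTextParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.current_href = None
        self.current_text = []
        self.anchors = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.current_href = dict(attrs).get("href")
            self.current_text = []

    def handle_data(self, data):
        if self.in_link:
            self.current_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.anchors.append((self.current_href,
                                 "".join(self.current_text).strip()))
            self.in_link = False

parser = AnchorTextParser()
parser.feed('<p>Read our <a href="/guide">beginner\'s guide to SEO</a>.</p>')
print(parser.anchors)   # [('/guide', "beginner's guide to SEO")]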

Backlink

A backlink for a given web resource is a link from some other website (the referrer) to that web resource (the referent). A web resource may be (for example) a website, web page, or web directory. A backlink is a reference comparable to a citation. The quantity, quality, and relevance of backlinks for a web page are among the factors that search engines like Google evaluate in order to estimate how important the page is. PageRank calculates the score for each web page based on how all the web pages are connected among themselves, and is one of the variables that Google Search uses to determine how high a web page should go in search results. This weighting of backlinks is analogous to citation analysis of books, scholarly papers, and academic journals. A Topical PageRank has been researched and implemented as well, which gives more weight to backlinks coming from a page on the same topic as the target page. Other words for backlink include incoming link, inbound link, inlink, inward link, and citation.

Content farm

In the context of the World Wide Web, a content farm (or content mill) is a company that employs large numbers of freelance writers to generate large amounts of textual content which is specifically designed to satisfy algorithms for maximal retrieval by automated search engines. Their main goal is to generate advertising revenue through attracting reader page views, as first exposed in the context of social spam. Articles in content farms have been found to contain identical passages across several media sources, leading to questions about the sites placing search engine optimization goals over factual relevance. Proponents of content farms claim that from a business perspective, traditional journalism is inefficient. Content farms often commission their writers' work based on analysis of search engine queries that proponents represent as "true market demand", a feature that traditional journalism purportedly lacks.

Copywriting

Copywriting is the act of writing text for the purpose of advertising or other forms of marketing. The product, called copy, is written content that aims to increase brand awareness and ultimately persuade a person or group to take a particular action.

Copywriters help create billboards, brochures, catalogs, jingle lyrics, magazine and newspaper advertisements, sales letters and other direct mail, scripts for television or radio commercials, taglines, white papers, social media posts, and other marketing communications.

Danny Sullivan (technologist)

Danny Sullivan is an American technologist, journalist, and entrepreneur. He co-founded Search Engine Land, an industry publication that covers news and information about search engines and search marketing, SEO, and SEM topics. Its publisher, Third Door Media, also produces Marketing Land, a sister website that covers broader digital marketing topics including social media, display advertising, email marketing, analytics, mobile, and marketing technology. Sullivan was a partner and Chief Content Officer at Third Door Media, a position from which he retired in June 2017.

In October 2017, Sullivan announced he would be joining Google as an adviser in the company's search division. He serves as Google's public Search Liaison, helping people better understand search and helping Google better hear public feedback. In 2015, Entrepreneur named him one of its 50 marketing influencers.

Google Hummingbird

Hummingbird is the codename given to a significant algorithm change in Google Search in 2013. Its name was derived from the speed and accuracy of the hummingbird. The change was announced on September 26, 2013, having already been in use for a month. "Hummingbird" places greater emphasis on natural language queries, considering context and meaning over individual keywords. It also looks deeper at content on individual pages of a website, with improved ability to lead users directly to the most appropriate page rather than just a website's homepage.

The upgrade marked the most significant change to Google search in years, with more "human" search interactions and a much heavier focus on conversation and meaning. Thus, web developers and writers were encouraged to optimize their sites with natural writing rather than forced keywords, and make effective use of technical web development for on-site navigation.

Google Panda

Google Panda is a major change to Google's search results ranking algorithm that was first released in February 2011. The change aimed to lower the rank of "low-quality sites" or "thin sites", in particular "content farms", and return higher-quality sites near the top of the search results.

CNET reported a surge in the rankings of news websites and social networking sites, and a drop in rankings for sites containing large amounts of advertising. This change reportedly affected the rankings of almost 12 percent of all search results. Soon after the Panda rollout, many websites, including Google's webmaster forum, became filled with complaints of scrapers/copyright infringers getting better rankings than sites with original content. At one point, Google publicly asked for data points to help detect scrapers better. In 2016, Matt Cutts, Google's head of webspam at the time of the Panda update, commented that "with Panda, Google took a big enough revenue hit via some partners that Google actually needed to disclose Panda as a material impact on an earnings call. But I believe it was the right decision to launch Panda, both for the long-term trust of our users and for a better ecosystem for publishers."

Google's Panda received several updates after the original rollout in February 2011, and their effect went global in April 2011. To help affected publishers, Google provided an advisory on its blog, thus giving some direction for self-evaluation of a website's quality. Google has provided a list of 23 bullet points on its blog answering the question of "What counts as a high-quality site?" that is supposed to help webmasters "step into Google's mindset". The name "Panda" comes from Google engineer Navneet Panda, who developed the technology that made it possible for Google to create and implement the algorithm.

HubSpot

HubSpot is a developer and marketer of software products for inbound marketing and sales. It was founded by Brian Halligan and Dharmesh Shah in 2006. Its products and services aim to provide tools for social media marketing, content management, web analytics and search engine optimization.

Keyword stuffing

Keyword stuffing is a search engine optimization (SEO) technique, considered webspam or spamdexing, in which keywords are loaded into a web page's meta tags, visible content, or backlink anchor text in an attempt to gain an unfair rank advantage in search engines. Keyword stuffing may lead to a website being banned or penalized on major search engines either temporarily or permanently. The repetition of words in meta tags may explain why many search engines no longer use these tags.

Many major search engines have implemented algorithms that recognize keyword stuffing, and reduce or eliminate any unfair search advantage that the tactic may have been intended to gain, and oftentimes they will also penalize, demote or remove websites from their indexes that implement keyword stuffing.
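
A naive illustration of the kind of signal such algorithms might start from is a keyword-density check, sketched below; the 5% threshold and the sample text are illustrative assumptions only, and real ranking systems rely on far more sophisticated, undisclosed signals.

# Sketch: flag possible keyword stuffing with a naive keyword-density check.
# The 5% threshold and the sample text are illustrative assumptions only.
import re

def keyword_density(text, keyword):
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return hits / len(words)

sample = ("Cheap shoes! Buy cheap shoes online. Our cheap shoes are the "
          "cheapest cheap shoes you will find.")
density = keyword_density(sample, "cheap")
print(f"density of 'cheap': {density:.0%}")   # about 24% on this sample
if density > 0.05:
    print("possible keyword stuffing")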

Changes and algorithms specifically intended to penalize or ban sites using keyword stuffing include the Google Florida update (November 2003), Google Panda (February 2011), Google Hummingbird (August 2013), and Bing's September 2014 update.

Link building

In the field of search engine optimization (SEO), link building describes actions aimed at increasing the number and quality of inbound links to a webpage with the goal of increasing the search engine rankings of that page or website. Briefly, link building is the process of establishing relevant hyperlinks (usually called links) to a website from external sites. Link building can increase the number of high-quality links pointing to a website, in turn increasing the likelihood of the website ranking highly in search engine results. Link building is also a proven marketing tactic for increasing brand awareness.

Link farm

On the World Wide Web, a link farm is any group of web sites that all hyperlink to every other site in the group. In graph theoretic terms, a link farm is a clique. Although some link farms can be created by hand, most are created through automated programs and services. A link farm is a form of spamming the index of a web search engine (sometimes called spamdexing). Other link exchange systems are designed to allow individual websites to selectively exchange links with other relevant websites and are not considered a form of spamdexing.
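
The graph-theoretic description above can be made concrete with a short sketch that tests whether a group of sites forms a clique, i.e. whether every site in the group links to every other; the link graph is a hypothetical example.

# Sketch: test whether a group of sites forms a clique (every site links to all
# the others), the structure described above for a link farm.
# The link graph below is a hypothetical example.
def is_link_farm(links, group):
    """links maps a site to the set of sites it links to."""
    return all(links.get(a, set()) >= (set(group) - {a}) for a in group)

links = {
    "siteA": {"siteB", "siteC"},
    "siteB": {"siteA", "siteC"},
    "siteC": {"siteA", "siteB"},
    "blog1": {"siteA"},            # links in, but does not link to every member
}
print(is_link_farm(links, ["siteA", "siteB", "siteC"]))           # True
print(is_link_farm(links, ["siteA", "siteB", "siteC", "blog1"]))  # False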

Search engines require ways to confirm page relevancy. A known method is to examine for one-way links coming directly from relevant websites. The process of building links should not be confused with being listed on link farms, as the latter requires reciprocal return links, which often renders the overall backlink advantage useless. This is due to oscillation, causing confusion over which is the vendor site and which is the promoting site.

Matt Cutts

Matthew Cutts (born 1972 or 1973) is an American software engineer. Cutts is the Administrator of the United States Digital Service. He was first appointed as acting administrator, to later be confirmed as full administrator in October 2018. Cutts previously worked with Google as part of the search quality team on search engine optimization issues. He is the former head of the web spam team at Google.

Nofollow

nofollow is a value that can be assigned to the rel attribute of an HTML a element to instruct some search engines that the hyperlink should not influence the ranking of the link's target in the search engine's index. It is intended to reduce the effectiveness of certain types of Internet advertising, because search engines' algorithms depend heavily on the number of links to a website when determining which websites should be listed in what order in their search results for any given term.
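
A minimal sketch of how a crawler or link-analysis tool might honor the nofollow value when deciding which links count for ranking; the list of extracted links is a hypothetical example.

# Sketch: keep only links whose rel attribute does not include "nofollow".
# The list of (href, rel) pairs stands in for a hypothetical crawler's raw output.
raw_links = [
    ("https://example.com/docs", None),
    ("https://spammy.example/buy-links", "nofollow"),
    ("https://example.org/partner", "sponsored nofollow"),
]

followed = [href for href, rel in raw_links
            if not rel or "nofollow" not in rel.split()]
print(followed)   # only https://example.com/docs is kept for ranking purposes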

Organic search

Organic search is a method for entering one or several search terms as a single string of text into a search engine. Organic search results, which appear as paginated lists, are based on relevance to the search terms and exclude advertisements, whereas non-organic search results do not filter out pay-per-click advertising.

Search engine optimization metrics

A number of metrics are available to marketers interested in search engine optimization. Search engines and software creating such metrics all use their own crawled data to arrive at a numeric conclusion on a website's organic search potential. Since these metrics can be manipulated, they can never be completely reliable for accurate and truthful results.

Site map

A site map (or sitemap) is a list of pages of a web site.

There are three primary kinds of site map:

Site maps used during the planning of a Web site by its designers.

Human-visible listings, typically hierarchical, of the pages on a site.

Structured listings intended for web crawlers such as search engines.

Spam blog

A spam blog, also known as an auto blog or the neologism splog, is a blog which the author uses to promote affiliated websites, to increase the search engine rankings of associated sites or to simply sell links/ads.

The purpose of a splog can be to increase the PageRank or backlink portfolio of affiliate websites, to artificially inflate paid ad impressions from visitors (see made for AdSense or MFA-blogs), and/or use the blog as a link outlet to sell links or get new sites indexed. Spam blogs are usually a type of scraper site, where content is often either inauthentic text or merely stolen (see blog scraping) from other websites. These blogs usually contain a high number of links to sites associated with the splog creator which are often disreputable or otherwise useless websites.

There is frequent confusion between the terms "splog" and "spam in blogs". Splogs are blogs where the articles are fake and are created only for search engine spamming. To spam in blogs, conversely, is to include random comments on the blogs of innocent bystanders, in which spammers take advantage of a site's ability to allow visitors to post comments that may include links. In fact, one of the earliest uses of the term "splog" referred to the latter. This is often used in conjunction with other spamming techniques, including spings.

Spam in blogs

Spam in blogs (also called simply blog spam, comment spam, or social spam) is a form of spamdexing. (Note that blogspam also has another meaning, namely the post of a blogger who creates posts that have no added value to them in order to submit them to other sites.) It is done by posting (usually automatically) random comments, copying material from elsewhere that is not original, or promoting commercial services to blogs, wikis, guestbooks, or other publicly accessible online discussion boards. Any web application that accepts and displays hyperlinks submitted by visitors may be a target.

Adding links that point to the spammer's web site artificially increases the site's search engine ranking on engines where the popularity of the URL contributes to its implied value; an example is the PageRank algorithm used by Google Search. An increased ranking often results in the spammer's commercial site being listed ahead of other sites for certain searches, increasing the number of potential visitors and paying customers.

Spamdexing

In digital marketing and online advertising, spamdexing (also known as search engine spam, search engine poisoning, black-hat search engine optimization (SEO), search spam, or web spam) is the deliberate manipulation of search engine indexes. It involves a number of methods, such as link building and repeating unrelated phrases, to manipulate the relevance or prominence of resources indexed, in a manner inconsistent with the purpose of the indexing system.

It could be considered a part of search engine optimization, though there are many search engine optimization methods that improve the quality and appearance of the content of web sites and serve content useful to many users. Search engines use a variety of algorithms to determine relevancy ranking. Some of these include determining whether the search term appears in the body text or URL of a web page. Many search engines check for instances of spamdexing and will remove suspect pages from their indexes. Also, search-engine operators can quickly block the results-listing from entire websites that use spamdexing, perhaps alerted by user complaints of false matches. The rise of spamdexing in the mid-1990s made the leading search engines of the time less useful. Using unethical methods to make websites rank higher in search engine results than they otherwise would is commonly referred to in the SEO (search engine optimization) industry as "black-hat SEO". These methods are more focused on breaking the search-engine-promotion rules and guidelines. In addition, the perpetrators run the risk of their websites being severely penalized by the Google Panda and Google Penguin search-results ranking algorithms.

Common spamdexing techniques can be classified into two broad classes: content spam (or term spam) and link spam.


This page is based on a Wikipedia article written by its contributors.
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.