Search engine optimization (SEO) is an important technique for webmasters who are responsible for creating and managing websites. This basic SEO glossary is a reference to help webmasters and SEO beginners understand search engine terminology.
200 Status Code
200 means status OK: the file request was successful.
301 Redirect
A 301 redirect means "moved permanently" and should be used when you move content to a new location for good (e.g. moving your site to a new domain name).
302 Redirect
A 302 redirect signals a temporary redirect: the file was found but temporarily resides at another URI.
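On an Apache server, redirects are often configured in the .htaccess file. A minimal sketch of both redirect types (the file names and domain are placeholders):

```apache
# Permanently redirect a single moved page (301 = moved permanently)
Redirect 301 /old-page.html http://www.example.com/new-page.html

# Temporarily redirect a page (302 = found / moved temporarily)
Redirect 302 /sale.html http://www.example.com/holiday-sale.html
```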
403 Status Code
403 means the document request was forbidden: access to the URL was denied.
404 Status Code
404 means the document was not found: the server could not find the requested URL.
Algorithm
An algorithm is the set of rules a search engine uses to rank web pages for users' search queries. Google's search algorithms use over 200 factors, including Google PageRank, to rank websites, and Google updates its algorithms regularly.
ALT Attribute
The ALT attribute is an HTML attribute (often called the ALT tag) used with images. It helps screen readers and search engines understand the meaning of an image by providing a text equivalent for the object, and the text description is displayed when the image itself cannot be loaded.
ALT attribute is also called ALT text.
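In HTML, the ALT attribute is set on the img element; a minimal example (the file name and wording are placeholders):

```html
<!-- The alt text gives screen readers and crawlers a text
     equivalent of the image -->
<img src="nikon-digital-camera.jpg" alt="Nikon digital camera, front view">
```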
Anchor Text
Anchor text is the clickable text (words) of a link.
Search engines use anchor text as a signal of a web page's relevance: they treat your page as authoritative for the words other sites use in links pointing at it.
Backlinks
Backlinks are links pointing from one website to another. The more high quality backlinks a web page receives, the higher the page tends to rank in the SERP.
Review the backlinks pointing to your site from other sites through Google Webmaster Tools.
Backlinks are also called inbound links.
Canonicalization
Canonicalization, according to Matt Cutts, is:
The process of picking the best URL when there are several choices; this usually refers to home pages.
SEO Book explains that canonical URL issues are caused by:
- Different host names
- Redirects pointing to different URLs (e.g. a 302 used inappropriately)
- Forwarding multiple URLs to the same content, and/or publishing the same content on multiple domains
- Improperly configured dynamic URLs (e.g. URL rewriting based on changing conditions)
- Two index pages appearing in the same location
- Different protocols
- Multiple slashes in the filepath
- Scripts that generate alternate URLs for the same content (e.g. some blogging and forum software, ecommerce software that adds tracking URLs)
- Port numbers in the domain name (e.g. example.com:4430, sometimes seen in virtual hosting environments)
- Capitalization
- URLs "built" from the path you take to reach a page (e.g. tracking software may incorporate the click path in the URL for statistical purposes)
- Trailing question marks, with or without parameters (e.g. www.example.com/?source=cnn, a common tagging strategy among ad buys)
Webmasters should use a consistent internal linking structure to help search engines index the canonical (correct) version of each URL and prevent link juice from leaking through duplicate URLs.
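Search engines also support a rel="canonical" link element that declares a page's preferred URL (this element is not covered above, and the URL here is a placeholder):

```html
<!-- Placed in the <head> of every duplicate version of the page -->
<link rel="canonical" href="http://www.example.com/product.html">
```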
Cascading Style Sheets (CSS)
CSS is a method of adding styles to HTML web pages; a style sheet contains presentation information for your HTML documents, such as paragraph layout, colors and font sizes.
Cloaking
Cloaking is showing different versions of a web page to search engines and to users. Unless the objective is SEO geo-targeting of users based on their locations, search engines regard cloaking as a violation of their guidelines and may ban your sites from showing up in the SERP.
Content Management Systems (CMS)
A CMS is a web application that makes it easy for webmasters to publish and update information on websites.
Crawlability
Crawlability refers to the ability of a website to be indexed by a search engine's crawler. Your site is fully crawlable when search engine crawlers can follow all the links on your website.
Crawlers
Crawlers (also known as spiders or bots) are automated programs that follow links from web page to web page, searching the web for content to include in the search engine's index. The spider names of Google, Bing, Yahoo and Baidu are:
- Google: GoogleBot
- Bing: MSNBot
- Yahoo: Slurp
- Baidu: Baiduspider
To have your site included in a search engine's index, ensure crawlers can follow the links on your site.
Domain
A domain refers to a specific website address.
Doorway Pages
Doorway pages are web pages created to rank for highly targeted keywords and then redirect users to another page with content other than what they expect. A doorway page is also called a bridge page or gateway page.
Entry Page
An entry page is the first page a user lands on when visiting your site; it is also called a landing page.
Flash
With Flash, animation and interactivity can be added to web pages. However, most search engine spiders have trouble crawling content in Flash files.
Frames
With frames, more than one web page can be embedded and displayed in a single browser window.
Avoid building your web pages with frames, as search engine spiders have trouble crawling and attributing the text content of pages embedded in frames.
Forward DNS
Forward DNS is the process of translating domain/host names into the numeric addresses of websites (IP addresses).
Head Terms
Head terms are search queries that receive extremely high search volume, for example "phone" or "music".
Google Webmaster Tools search queries report allows you to view users’ head term search queries.
Hidden Text
Hidden text (or invisible text) is text that is invisible to users but visible to search engines; it is a technique used to fool search engine spiders.
Google has methods to detect hidden text and regards using hidden text as a way to spam SERP.
.htaccess
The .htaccess file is an Apache directory-level configuration file that can be used, for example, to password protect or redirect files. It consists of one or more configuration directives placed in a website document directory; the directives apply to that directory and all its sub-directories.
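A minimal .htaccess sketch of the password-protection use mentioned above (the realm name and .htpasswd path are placeholders):

```apache
# Password protect this directory and all its sub-directories
AuthType Basic
AuthName "Restricted Area"
AuthUserFile /home/user/.htpasswd
Require valid-user
```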
HTML Site Map
An HTML sitemap is a web page that:
- Consists of links pointing to all the web pages of your site.
- Serves as an alternative route (other than the main site navigation) for search engine spiders to crawl your web pages.
Examples of HTML site map documents:
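At its simplest, an HTML sitemap is a page of links, one per page of the site (the page names here are placeholders):

```html
<ul>
  <li><a href="/index.html">Home</a></li>
  <li><a href="/about.html">About Us</a></li>
  <li><a href="/contact.html">Contact</a></li>
</ul>
```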
HTTP
HTTP stands for Hypertext Transfer Protocol.
HTTPS
HTTPS stands for Hypertext Transfer Protocol Secure.
HTTP Referrer
The HTTP referrer identifies the source of traffic coming to your site: the web page from which a visitor followed a link.
Index
A search engine's index is an organized database that stores an extremely large number of web documents found by the search engine's crawler on the web.
For example, your web pages have to be in Google’s index before they can show up in Google’s SERP.
IP Address
Internet Protocol (IP) addresses are unique sets of numbers assigned to networks or computers for communicating across the Internet using the TCP/IP protocol. An IPv4 address is a 32-bit numeric address written as 4 numbers separated by periods, each ranging from 0 to 255.
For example, 192.168.8.1 up to 192.168.15.254 are valid IP addresses.
Keyword
A keyword is a word that identifies a specific topic. For example, "PPC" is the main topic of this blog.
Keyword Density
Keyword density is how often a keyword or keyword phrase is used within the content of a web page, relative to the page's total word count:
Keyword Density = occurrences of a specific keyword / total number of words on the page
The more often a keyword appears on a web page, the higher its keyword density.
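The calculation can be sketched in a few lines of Python; the sample sentence and keyword below are made up for illustration:

```python
def keyword_density(text, keyword):
    """Fraction of the page's words that match the given keyword."""
    words = text.lower().split()
    if not words:
        return 0.0
    matches = sum(1 for word in words if word == keyword.lower())
    return matches / len(words)

# "seo" appears 3 times among 8 words: 3 / 8 = 0.375
print(keyword_density("seo tips and seo tools for seo beginners", "seo"))
```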
Keyword Stemming
Keyword stemming is expanding a "stem" keyword to create additional keywords related to it.
For example, “digital camera” is a stem keyword, and we expand to create new keywords:
Nikon digital camera 14MP
Canon digital camera
Nikon digital camera 3X
Used Nikon digital camera
Keyword Stuffing
Keyword stuffing is a black hat SEO technique used by webmasters to boost a website's organic search engine ranking. Webmasters "stuff keywords" by repeating the same keywords on a web page an excessive number of times, usually at the expense of user experience.
Keyword stuffing is completely outdated and adds no value to a website’s search engine rankings today.
Link Popularity
Link popularity is the quantity and quality of links pointing to your website from other websites. When many high quality links from other websites (external inbound links) and from your own pages (internal inbound links) point to your site, search engines believe your site is important, and your web pages may rank higher in the SERP.
Link Bait
Link bait is content you create on your site that people find valuable and interesting. Because the content creates value, other webmasters will eventually link to it.
Log File
A log file (or web server log) records information about a website's incoming and outgoing traffic, including search engine spider activity, and can be analyzed by webmasters to improve a site's SEO.
Long Tail Keyword
Long tail keywords are highly specific search queries that receive low search volume, for example "Canon PowerShot SD980 12.1 Megapixel". Long tail keywords usually have very high click through rates.
META Description Tag
The META description tag is where you place a brief description of your web page.
In Google's SERP, META description tags often appear as the snippet shown for a web page in the search results. Writing a relevant and unique META description tag for each of your web pages is important and can increase your search results' click through rates.
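The META description tag goes in the page's head section; a minimal example (the description text is a placeholder):

```html
<head>
  <meta name="description" content="A glossary of basic SEO terms for webmasters and beginners.">
</head>
```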
META Keywords Tag
The META keywords tag is an HTML tag on your web pages that holds keyword phrases, separated by commas.
Most search engines do not find the META keywords tag useful, and Google has confirmed that it does not contribute to organic search rankings.
NoFollow
NoFollow (rel="nofollow") is an attribute that lets webmasters tell search engines:
- Not to follow any links on a web page
- Not to follow a specific link on a web page
Google suggests webmasters should consider using “nofollow” for untrusted content, paid links or crawl prioritization.
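Both uses can be expressed in HTML: a robots META tag for the whole page, or a rel attribute on a single link (the URL is a placeholder):

```html
<!-- Page level: do not follow any link on this page -->
<meta name="robots" content="nofollow">

<!-- Link level: do not pass credit through this specific (e.g. paid) link -->
<a href="http://www.example.com/sponsor.html" rel="nofollow">Sponsor</a>
```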
Organic Results
Organic results are the non-paid search results that appear in the SERPs.
PageRank
PageRank is Google's patented ranking technology, developed at Stanford University, that reflects Google's view of the importance of web pages. Important web pages in Google's index are:
- Assigned higher PageRank
- More likely to rank higher in Google’s search engine results pages (SERP)
Increase Google PageRank explains:
- When a web page receives two links, one from a more important page and one from a less important page, the link juice from the more important page counts as a higher value vote for the linked page.
- Based on the entire web’s link structures, Google uses over 200 signals/factors including the PageRank algorithm to identify which web pages are most important. Then Google combines web pages’ importance, page-based text matching techniques and search query specific relevance to rank web pages in SERP.
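As a highly simplified illustration of the underlying idea (real PageRank, and Google's 200+ signals, are far more involved), here is a toy power-iteration sketch in Python using a made-up three-page link graph:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: `links` maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1 - damping) / n for page in pages}
        for page, outlinks in links.items():
            # A page splits its rank evenly among the pages it links to;
            # a page with no outlinks shares its rank with every page.
            targets = outlinks or pages
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

# Made-up graph: A and C both link to B, B links to C.
ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["B"]})
# B receives the most links (including one from the important page C),
# so it ends up with the highest rank.
```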
Penalty
Penalties are actions, applied algorithmically or manually, against a site suspected of using spamming techniques; they prevent the website from ranking high in the SERP.
The ban/penalty may be lifted after you have fixed the issues for which you were penalized. For example, webmasters can submit a reinclusion request to Google.
Reverse DNS
Reverse DNS is the process of translating the numeric addresses of websites (IP addresses) into domain/host names. Find out:
- What Reverse DNS serves for
- How to configure the Reverse DNS
- Where to get Reverse DNS lookup tools
Robots.txt
Robots.txt is a text file in the root directory of a website that tells search engines:
- Which parts of the site search engine spiders may crawl.
- Which parts of the site are blocked from search engine spiders' access.
You can configure robots.txt with instructions to block crawl access from:
- Google's GoogleBot, Yahoo's Slurp and Bing's MSNBot, as the big 3 search engines obey the instructions in robots.txt.
- Baiduspider: Review Baidu Robots.txt
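A minimal robots.txt sketch (the /private/ path is a placeholder): the first rule applies to all crawlers, the second blocks Baiduspider from the whole site:

```
User-agent: *
Disallow: /private/

User-agent: Baiduspider
Disallow: /
```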
Search Engine Optimization (SEO)
Search engine optimization (SEO) is a search engine marketing technique to improve a website’s visibility within one or more search engines.
Find more official and unofficial definitions of SEO from Google SEO for beginners.
Search Engine Results Page (SERP)
A search engine results page (SERP) is the page of search results returned to a user after the user enters a search query into the search engine's search box.
For example, Google’s SERP consists of both organic search results (SEO) and paid search results (PPC).
Search Engine Submission
Search engine submission is a method of letting search engines "know" about the existence of your new website. Google, Yahoo, Bing and Baidu all provide submission pages for webmasters:
Google, Bing and Yahoo allow local-based businesses to manually submit websites to local business search:
Search Engine
Search engines, as defined by Wikipedia:
A search engine is designed to search for information on the World Wide Web and FTP servers. The search results are presented in a list of results. The information may consist of web pages, images, information and other types of files. Search engines operate algorithmically or are a mixture of algorithmic and human input.
Search Query
A search query is a keyword or keyword phrase that a user enters into a search engine's search box. The search engine then determines the search results to return to the user.
For example, the Google Webmaster Tools search queries report shows you the users' search queries for which your web pages showed up in Google's SERP.
Server-side Tracking
Server-side tracking refers to monitoring web traffic through the web server's log files.
Session ID
A session ID is a unique number assigned to each individual user's visit (or session) to a website.
Websites that use session IDs can cause duplicate content issues for search engines, as every time a search engine spider requests/crawls a web page from the site, a different URL (with a different session ID) is generated.
Spam
Spam (or webspam) refers to deliberate search engine marketing techniques that try to improve search engine rankings by violating search engines' guidelines. Spamming techniques include:
- Keyword stuffing
- Hidden text
- Doorway pages
- Link farms
- Duplicate content
Stop Words
Stop words are words that search engines consider of no importance to a web page's ranking in the SERP; they do not contribute to the keyword density of your web pages.
Examples of stop words include “the”, “and”, “of”, “is”, “for”.
Theme
A theme is the overall topic of a web page. Having an obvious topic for your web page helps search engines determine the theme of your page and rank it in the SERP.
For example, Google looks at the keyword density and keyword stemming of a web page and determines the web page’s theme.
Title Tag
The title tag in an HTML document is where you place the page title of a web page.
- In Google SERP, the content of your web page’s title tag usually shows up in the first line of a search result.
- Using a relevant and unique title tag for each web page is important and can increase the click through rate of your search results.
- The title tag is one of the factors in organic search engines' ranking algorithms.
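The title tag sits in the head of the HTML document; a minimal example (the title text is a placeholder):

```html
<head>
  <title>SEO Glossary for Beginners</title>
</head>
```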
Traffic
Traffic is the number of unique visitors or visits (sessions) that a website receives. Programs that can provide traffic information for your site:
- Website’s log file
- Google Analytics
URL Rewrite
URL rewriting is a method (e.g. Apache's mod_rewrite) applied to a site's URLs. Webmasters apply URL rewrites to:
- Change dynamic URLs, which are difficult for search engine spiders to index, into static URLs.
- Preserve working URLs on sites that change their directory structures during content re-organization.
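A minimal mod_rewrite sketch in .htaccess (the URL pattern and script name are placeholders): a request for the static-looking /product/42 is internally served by the dynamic product.php?id=42:

```apache
RewriteEngine On
# Map /product/<number> to the underlying dynamic URL
RewriteRule ^product/([0-9]+)$ /product.php?id=$1 [L]
```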
User Agent
User agent refers to the identity a software program presents when visiting a website. Spiders and browsers are both classified as user agents.
- Spiders: e.g. GoogleBot, Baiduspider
- Browsers: e.g. Mozilla Firefox, Google Chrome, Internet Explorer
XML Site Map
An XML sitemap is a document in XML (Extensible Markup Language) format that lists all the web pages of your site. Google encourages webmasters to use XML sitemaps:
Creating and submitting a Sitemap helps make sure that Google knows about all the pages on your site, including URLs that may not be discoverable by Google’s normal crawling process.
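A minimal XML sitemap sketch with a single URL entry (the URL and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2011-01-15</lastmod>
  </url>
</urlset>
```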
SEO Glossary Lists
Webmasters should also review SEO glossary lists created by other SEO experts.
- SEO Book – Licensed under Creative Commons Attribution 2.5
- SEOmoz Blog
- SEO Theory
- Search Engine Dictionary
SEO Tips for Webmasters
- Google SEO for beginners: Offers Google’s official SEO definition and fundamental SEO tips, including SEO ranking factors, SEO guides for download and why you should use Google Webmaster Tools.
- Best Google SEO resources: Offers tips on domain name and web hosting selection criteria, keyword research, site architecture and URL structure, site navigation, canonicalization and content duplication, link building and paid link issues, Google penalties, SEO geo-targeting, social media for SEO, SEO for blogs, universal search, and Google sitelinks.
- Google SEO beginners guide update: Google updated the SEO starter guide for all webmasters.