I know when I started out on the Internet (1998) things were very different – web sites were far simpler, the vast majority of businesses were not using the Internet to market their products and services, and the search engines were terrible – I think I used AltaVista. The first online chat network I remember was AOL. A lot has changed for sure.
When I started learning to code (HTML, PHP, ASP, etc.), I was fascinated by how brilliantly the entire Internet was conceived, and I did a lot of head scratching back then. I had no idea why some sites sat at the top of the search engine results of the day.
When I looked into it (around the summer of 2000), I was surprised to learn that the web sites enjoying the top positions were mostly cheating. They were using keyword stuffing (cramming primary and secondary keywords into the footers of their pages), cloaking, spam linking, and so on.
The tricksters made some good money manipulating the search results. Then two young men – Larry Page and Sergey Brin – changed all that.
Using Page's PageRank algorithm, Google could tell which web sites were the authority sites based on how many web pages linked to a given domain.
Well, the manipulators soon learned how to trick Google too – they built artificial links. Unfortunately for them, it didn't last long: Google caught up with them and changed its algorithm yet again.
Ever since Google began dominating the search market (roughly 65% worldwide and 80% in Canada at the time of writing), so-called search engine optimization specialists have been speculating about what brings web sites to the top of the results.
Of course, no SEO company really knows exactly what brings a web site to the top of Google's search results – the best we can do is make an educated guess.
So how do you guess?
Trial and Error & Domain Platform
Since only a handful of engineers in the world literally know what Google loves, SEO companies have to rely completely on guesswork, testing, experimenting, and – most of all – TAKING notes!
So first we tried static web sites, which are non-CMS (no content management system) – no blogs, no WordPress, no Drupal, nothing – just clean, simple HTML code, with no ability to "talk to the search engines," no directories, and the like.
We found that Google has no problem with a static web site, but it takes Google a very LONG time to finally come by and cache all of those static pages. So if you don't mind waiting for Google to rank all of your pages, a static site is fine.
However, if you want your web site climbing the Google ladder in a reasonable amount of time (6-12 months), you need a powerful CMS – a site built on dynamic code (PHP, ASP, etc.) that can "talk to the search engines" automatically.
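One concrete way a CMS "talks to the search engines" is by publishing an XML sitemap that lists every page, so crawlers can discover new content without waiting to stumble across it. Here is a minimal sketch of what a CMS plugin does behind the scenes – the URLs and dates are hypothetical examples, not from any real site.

```python
# Minimal sketch of a sitemap.xml builder, the kind of thing a CMS
# generates automatically so search engine crawlers find new pages fast.
import xml.etree.ElementTree as ET

def build_sitemap(pages):
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc, lastmod in pages:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc          # the page's address
        ET.SubElement(url, "lastmod").text = lastmod  # when it last changed
    return ET.tostring(urlset, encoding="unicode")

# Hypothetical pages on a made-up domain:
sitemap = build_sitemap([
    ("http://www.example.com/", "2012-05-01"),
    ("http://www.example.com/blog/seo-basics", "2012-05-10"),
])
```

A static HTML site has no mechanism to keep a file like this up to date on its own, which is part of why Google takes longer to cache all of its pages.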
So fine – we learned that dynamic web sites do better than static sites when it comes to getting ranked quickly in Google, Yahoo, and Bing. BUT there has to be more to search engine optimization than that!
Yes, there is.
The Difference Between Good Linking and Bad Linking
Since Google was (and still is) getting the most eyeballs of any search engine in the world, the tricksters focused on gaming Google's search results. They focused (and still do) on getting links to the sites they are trying to raise in the rankings. At the time of this publication, manipulating links isn't working very well for the tricksters.
That's because Google is now very good at identifying which links are good links and which links are "bad" links. I'll spare you the technical ramble about inbound and outbound linking and simply explain what a good link looks like.
A good link is one that goes from a web page containing nothing but pure, original, well-written content to another web page. And with good-quality links you don't need very many to increase the value of a web page "in Google's prying eyes" (its bots).
So if quality means more than quantity in Google's eyes, and Google judges links by how good the content is on the page linking out, then how does Google decide what "good" content is? This is the MOST important question of 2012!
How Google Judges Content
They look at every bit of the source code: the grammar, the punctuation, the images and image tags, the content tags, the structure, the overall semantic picture of the page (LSI – latent semantic indexing), and so on.
So if you write a fresh, unique piece of content (like the one you are reading now) and include some interesting pictures, graphs, or videos, Google will like it and score it as a valuable page on the Internet.
Furthermore (and this is the key), Google sees more value in the links leaving that page.
In essence, when you create a quality page of content, and link to other pages on your site from your article, you are creating your own valuable links.
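The "create your own valuable links" idea boils down to contextual internal linking: inside a quality article, the first mention of a relevant phrase becomes a link to another page on the same site. A rough sketch of the mechanics – the page names and URLs here are made up for illustration:

```python
# Hypothetical sketch: turn the first mention of a chosen phrase in an
# article into a contextual link pointing at another page on your own
# site, instead of begging other webmasters for links.
def add_internal_links(article_html, links):
    for phrase, url in links.items():
        anchor = '<a href="%s">%s</a>' % (url, phrase)
        # Link only the first mention; linking every mention looks spammy.
        article_html = article_html.replace(phrase, anchor, 1)
    return article_html

post = "<p>Good writing matters more than link schemes in search engine optimization.</p>"
linked = add_internal_links(post, {"search engine optimization": "/services/seo"})
```

The link lives inside original, well-written content, which is exactly the kind of link the previous section described as "good."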
Are you following along here?
So remember this sentence:
“You don’t need to ask other webmasters to link to your pages anymore – you create your OWN valuable links.”
For instance, on our clients’ web sites and our own, we never create “fabricated links”. Never. We just create quality content. We’ve been handling our SEO (search engine optimization) this way for the past three years, and MAN, HAS IT WORKED!
So what we do for our customers is write excellent content and publish it on the web. The old SEO method of simply gathering mediocre links can actually hurt a domain, and hurt your business.
As I like to tell our team (and they’re probably sick of it by now):
“The good writers always win.”
If you are just starting out as an SEO company and trying to figure out where to invest your dollars, the best thing to spend money on is GOOD WRITERS and GOOD bloggers.