10 mistakes to avoid when building a website

Optimise your website

Building a technically sound website is vital for any successful SEO campaign, and here are ten of the most basic mistakes to avoid.

1. No Sitemap

A sitemap is an XML file that feeds search engines data about a website's most important pages, such as the date of the last update and the importance of a page compared to others. All of this allows their spiders to crawl the site more intelligently and economically. Although creating a sitemap can't by itself guarantee search engine success, it is certainly a quick win and relatively easy for a developer to implement.
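
For illustration, a minimal sitemap.xml could look something like this (the URLs, dates and priorities below are placeholders, not values from any real site):

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <!-- One <url> entry per important page -->
    <url>
      <loc>http://www.example.com/</loc>
      <lastmod>2015-06-01</lastmod>      <!-- date of the last update -->
      <changefreq>weekly</changefreq>    <!-- how often the page tends to change -->
      <priority>1.0</priority>           <!-- importance relative to the other pages -->
    </url>
    <url>
      <loc>http://www.example.com/services/</loc>
      <lastmod>2015-05-20</lastmod>
      <priority>0.8</priority>
    </url>
  </urlset>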

2. Failed Domain Check (canonical)

A failed canonical domain check means that the homepage is accessible through more than one URL. For example, onedigitalmedia.co.uk/index.php, onedigitalmedia.co.uk/home.php and onedigitalmedia.co.uk/ all load the homepage.

Having multiple URLs load identical content is a problem because inbound link equity becomes divided, which diminishes the overall SEO value of the site. Solutions include implementing the correct canonical tags (Google explains this well) or setting up 301 redirects that point the duplicate URLs to the one preferred location.
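
As a sketch, assuming onedigitalmedia.co.uk/ is chosen as the preferred URL, the canonical tag sits in the <head> of each duplicate page (use whichever protocol, http or https, the site actually runs on):

  <!-- In the <head> of /index.php and /home.php -->
  <link rel="canonical" href="http://onedigitalmedia.co.uk/" />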

3. Slow load times

Although broadband speeds in the UK are increasing, slow loading times on desktop and mobile are still a problem, and Google announced in 2010 that page loading time is a factor in its ranking algorithm.

Using Google's PageSpeed Insights tool, any page can be analysed to identify the reasons for slow speeds and their solutions. Common fixes include eliminating render-blocking JavaScript and CSS (your website's style sheets) above the fold, leveraging browser caching, optimising images, enabling compression, and minifying JavaScript, CSS and HTML. Minifying means removing unnecessary characters such as extra spaces, e.g. double spacing after a period, which is a throwback to word processing on a typewriter and one of my irritations when I try to advise writers of content for the web.
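
For example, one common fix for render-blocking JavaScript is to load scripts with the defer (or async) attribute so the browser can render the page before the script runs; the file name here is just a placeholder:

  <!-- Render-blocking: parsing stops while the script downloads and runs -->
  <script src="/js/site.js"></script>

  <!-- Non-blocking: the script downloads in parallel and runs once the HTML is parsed -->
  <script src="/js/site.js" defer></script>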

4. No Header Tags

Header tags such as <h1>, <h2> and <h3> give content its structure and help the search engines understand which parts are important.

Search engines use header tags to prioritise a page's content, so incorrect use can result in confusion. The solution is to ensure that the main <h1> tag is unique and accurately incorporates the topic of the page, including relevant keywords. Likewise, any following subheadings should use <h2> and <h3> tags where applicable.
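
A simple structure for, say, a page about running trainers might look like this (the headings are purely illustrative):

  <h1>Men's Running Trainers</h1>         <!-- one unique h1 stating the topic of the page -->
    <h2>Road Running Trainers</h2>        <!-- main subtopics use h2 -->
      <h3>Lightweight Racing Shoes</h3>   <!-- further detail uses h3 -->
    <h2>Trail Running Trainers</h2>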

5. Missing ALT Attributes 

Search engines cannot understand images, so it's imperative to attach descriptive and relevant text in the form of an ALT attribute. This allows the search engine to fully understand the image, and it's an opportunity to add relevancy and keyword-rich descriptions to your page. One common mistake is to be too vague with the description: for example, "blue trainers" can be further enhanced into "blue and white, limited-edition Nike running trainers".
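
In the markup, that difference looks like this (the file path is a placeholder):

  <!-- Too vague -->
  <img src="/images/trainers-01.jpg" alt="blue trainers">

  <!-- Descriptive and keyword-rich -->
  <img src="/images/trainers-01.jpg" alt="Blue and white, limited-edition Nike running trainers">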

6. Poor Meta Data

Title tags are still regarded as one of the most significant factors in on-site SEO success. The title is the first thing users see on search result pages, so it should be unique, succinct (70 characters max) and keyword-optimised according to the subject matter of the page.

Meta descriptions are also important, as they are essentially an opportunity to entice the user to click through to the page in the form of ad copy.
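
Both live in the <head> of the page; a rough sketch, with the shop name and wording purely as placeholders:

  <head>
    <title>Blue Running Trainers | Example Shop</title>  <!-- unique, 70 characters max -->
    <meta name="description" content="Browse our range of blue running trainers, from lightweight racers to cushioned everyday shoes.">
  </head>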

7. Keyword Stuffing

Ensuring that a fine balance of target keywords is incorporated into the body of a page is a challenging feat for any copywriter. Too many and the page risks appearing spammy; too few and the search engine may have trouble understanding what to rank the page for. The solution is to use a good range of synonyms presented in a well-written structure that will engage the reader.

8. Duplicate Content

Having duplicate content on a website is a common mistake that usually occurs on larger e-commerce, government and internal intranet sites with lots of product information and policy listings, and it can have detrimental consequences if not handled properly. Common instances of duplication occur where filters are applied to listings or where several minor variations of a product exist. A solution is to use canonical tags to point the duplicate pages to the corresponding main pages, so that one page gains all of the SEO value.
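
For instance, a filtered listing URL can declare the main category page as its canonical version (the URLs below are placeholders):

  <!-- In the <head> of /trainers?colour=blue&sort=price -->
  <link rel="canonical" href="http://www.example.com/trainers/" />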

9. Links from non-credible sources

Inbound links are considered the most important off-site factor in determining natural ranking success, with the quality of the link source being at the forefront. The most influential links come from authoritative websites within the same industry, with the linking page containing content relevant to the target page.

Although acquiring links from quality sites is challenging, one link from an authoritative site can have a more positive effect on rankings than a few hundred links from non-credible sources.

10. Generic Internal Anchor Text

Anchor text is the clickable text of any link on a page. When a website is crawled, the search engine uses this text when ascertaining the content and relevance of the page being linked to. Using generic anchor text such as "Click Here" when linking to internal pages is a missed SEO opportunity.

The anchor text should include relevant keywords that the target page wants to rank for, but over-optimising can also have a negative effect, so keep the balance right.
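
On the page itself, the difference is tiny (the URL and link wording here are placeholders):

  <!-- Generic: tells the search engine nothing about the target page -->
  <a href="/services/web-design/">Click here</a>

  <!-- Descriptive: relevant keywords for the target page -->
  <a href="/services/web-design/">our web design services</a>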

I hope this has helped you decide how to improve your presence online and make your site load faster. Using just some of these 10 tips to optimise your website will improve it immensely.

If you require help, please drop me a message and I will try and find the answer for you.

Kelvin