18 Mar 2019

EVERYTHING ABOUT DUPLICATE CONTENT YOU NEED TO KNOW
Have you applied for AdSense and been told that you have duplicate content on your blog? Or have you written an ebook for Kindle, only for Amazon to refuse to publish it because of duplicate content? Duplicate content is one of the things Google frowns at on a website or blog. Duplicate content can lead to 3 serious SEO challenges.
(1) Crawling problem
Duplicate content makes search engines crawl your most important pages less frequently.
(2) Linking problem
This happens when two URLs share links that should have helped the original page, unless one of them has a canonical link (or 301 redirect) pointing to the original page.
(3) Google only ranks one of the pages
Let's say you have two or more pages with duplicate content on your blog. Search engines will only rank one of those pages.
Having mentioned all of the above about duplicate content, I think it is of great importance to know what duplicate content is and how to overcome the issue.


WHAT IS DUPLICATE CONTENT

Duplicate content is when a good amount of content within a website is similar to what can be found on another website. Hope you now understand what duplicate content is. If that is understood, let us now look at the types of duplicate content and how to analyze it.

TYPES OF DUPLICATE CONTENT

There are two types of duplicate content you need to know about.
1. External duplicate content:- This is content you copied from another site.
2. Internal duplicate content:- This is content that can be found on several pages of your site.

 DUPLICATE CONTENT ANALYSIS
This analysis is all about what makes up duplicate content and what you need to look out for to determine whether content will be regarded as duplicate by search engines. The following are the things you need to analyse to avoid duplicate content.

URL CONSTRUCTION
The way your URL is constructed will determine whether your content will be regarded as duplicate or not.
For example, www.iphone.com/case2 and www.iphone.com/case2/ (the same page with and without a trailing slash) can be regarded as duplicates of each other, and the same goes for versions of a URL with extra parameters added.
PROTOCOL CONFLICT
What do I mean by protocol conflict? As you know, a protocol is a digital language through which machines communicate with each other on the internet. Common protocols and related technologies include:
TCP:- TRANSMISSION CONTROL PROTOCOL
FTP:- FILE TRANSFER PROTOCOL
SMTP:- SIMPLE MAIL TRANSFER PROTOCOL
HTTP:- HYPERTEXT TRANSFER PROTOCOL
ETHERNET
TELNET
GOPHER
DNS:- DOMAIN NAME SYSTEM
DHCP:- DYNAMIC HOST CONFIGURATION PROTOCOL
DSL:- DIGITAL SUBSCRIBER LINE


Explaining each of these protocols is not necessary for this tutorial. What is necessary now is knowing which of these protocols create duplicate content when they conflict with each other.
HTTP VS HTTPS
When you have the same content on different versions of your website (that is, the http:// and the https:// versions of your website), you already have duplicate content.
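One common way of avoiding this (it relies on the 301 redirect solution explained later in this post) is to force a single protocol version of the site. As a minimal sketch, assuming an Apache server where .htaccess files and mod_rewrite are enabled (check what your own host supports), rules like these send every http:// request to the https:// version with a permanent redirect:

# Force the https:// version of every URL with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]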

SCRAPED OR COPIED CONTENT
Scraping or copying an article from another website is the number one way of creating duplicate content.
For example: when several people copy a product description from an e-commerce site and paste it on their own websites, everyone who copied that product information has already created duplicate content.

SOLUTION TO DUPLICATE CONTENT
Whether you have been alerted by Google for duplicate content or another webmaster has contacted you for copying his content, here are the three major ways of solving duplicate content issues.

(1) USING A 301 REDIRECT
Redirect in computer language is a little different from the way the word is used in everyday English.
In everyday English, to redirect means to change the direction of something, while in computing it means forwarding one URL to another URL.

301 MOVED PERMANENTLY

NON-TECH EXPLANATION
Let's say you go to look for an old friend of yours at his former address, and someone informs you that the person you came to look for no longer stays there and will never be coming back. You are then given a new address where you can find your friend.

TECH INTERPRETATION
A 301 redirect is a way of permanently sending visitors and search engines to a URL different from the one they typed in their browser. This redirect can also be used to point several URLs to a single URL for better DA (Domain Authority).

USES OF A 301 REDIRECT
1. To bring traffic from several URLs to a single URL.
When several URLs drive traffic to a single URL, it gives the receiving URL more DA (Domain Authority), which is a very important factor for SEO.
2. To rename a website with a different URL.
3. To fix what happens to a site with duplicate content, by pointing the duplicate URLs to the original page (see the sketch below).
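As a rough illustration of use number 3, assuming an Apache server where .htaccess files are enabled (the path and domain below are only placeholders), a single line in .htaccess permanently forwards a duplicate URL to the original page:

# Send visitors and search engines from the duplicate URL to the original page
Redirect 301 /duplicate-page/ https://www.example.com/original-page/

Once search engines re-crawl the old URL, they treat the original page as the one to rank, and most of the link value the duplicate had collected is generally passed on to it.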

(2) USING CANONICAL TAGS
The canonical tag is the second solution to duplicate content. It is a method used to tell search engines that a specific URL should be treated as more important than the others. Using canonical tags prevents the problem of duplicate content appearing on multiple URLs.

A canonical tag is written in this format:

<link rel="canonical" href="http://ayomites.com/wordpress/seo-plugin/" />
Let's say you have two URLs with the same content, which is regarded as duplicate content: URL A http://ayomites.com and URL B http://ayomites.com/index.php. You can use a canonical tag on the duplicate URL to tell search engines which of the two URLs should be taken seriously, that is, the one to be considered for indexing, like this:

<link rel="canonical" href="http://ayomites.com/" />
(3) USING "NOINDEX, FOLLOW" TAGS
"Noindex, follow" robots meta tags are pieces of code that give search engines instructions on how to crawl or index web page content. There are a lot of indexing controls available to webmasters, but this tutorial is only focusing on two.
1. noindex:- This tells search engines not to index a page.
2. follow:- This tells search engines to follow all the links on a page.
These two can be used to solve duplicate content issues, most especially issues with pagination. Take, for example, content that spans different pages of your website. You can use the "noindex, follow" robots meta tag to prevent search engines from indexing your duplicated URLs. Add this code to the head section of your duplicated pages:
<meta name="robots" content="noindex, follow">
