Faceted classification, syndication, & tagging are all great ways to increase accessibility to desirable content. However, increasing accessibility has a flip side – duplicate or redundant content delivery.
Human users sometimes prefer a degree of redundancy, so some content duplication is acceptable. Overdone, though, it quickly frustrates users.
Delivering duplicate content to both search engines and human users can significantly decrease any web document’s findability. Content duplication also erodes user confidence & can negatively affect an organization’s brand.
In this workshop, you will learn, step-by-step, how to limit or prevent access to duplicate content and still deliver a positive user experience (UX).
Who Will Benefit From This Workshop
Anyone who designs, develops, or promotes websites and intranets should attend this workshop! Librarians, publishers, and information scientists will likely find this information practical & useful.

Key Takeaways Include:
- How both human users & technologies “see” duplicate content
- Tools to help identify & diagnose duplication/redundancy issues
- 8 proven ways to fix duplicate content delivery
- Times when duplicate content is desirable
Downloadable materials included. Workshop also includes a Site Clinic to address your most pressing duplicate-content delivery questions.
(Don’t worry…the outline sounds technical, but I’ve worked out analogies with common street signs. Technical skills are not required.)
- What is duplicate content?
  - Definitions & examples
- Ways to identify duplicate content
- Areas where content duplication is acceptable & desirable
- 8 steps for managing duplicate content delivery
  - Information architecture/site navigation
  - Robots.txt file
  - Robots exclusion meta tag
  - NOFOLLOW attribute
  - Webmaster Tools
  - XML sitemaps
- Key takeaways
- Site clinic
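To give a flavor of the robots.txt topic above: a minimal sketch of how the robots exclusion protocol decides which URLs a crawler may fetch, using Python’s standard-library parser. The rules and URLs here are illustrative, not from any real site.

```python
# Illustrative only: hypothetical robots.txt rules that block a
# printer-friendly duplicate while leaving the canonical page crawlable.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Disallow: /print/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The printer-friendly duplicate is blocked for all crawlers...
print(parser.can_fetch("*", "https://example.com/print/article-1"))  # False
# ...while the original article remains crawlable.
print(parser.can_fetch("*", "https://example.com/article-1"))  # True
```

Blocking the duplicate variant (rather than the original) is the point: crawlers spend their budget on the canonical copy.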
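And for the XML sitemaps topic: a hedged sketch of building a minimal sitemap with Python’s standard library, listing only one canonical URL per page so crawlers are steered away from duplicate variants. The URLs are hypothetical.

```python
# A minimal XML sitemap (sitemaps.org 0.9 schema) containing only
# canonical URLs. The example.com addresses are placeholders.
import xml.etree.ElementTree as ET

canonical_urls = [
    "https://example.com/",
    "https://example.com/article-1",
]

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=NS)
for page in canonical_urls:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = page  # one <loc> per canonical page

print(ET.tostring(urlset, encoding="unicode"))
```

Submitting a sitemap of canonical URLs (e.g. via Webmaster Tools) is one of the ways a site can signal which copy of a page it considers authoritative.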