Bill Dennis argues that, contrary to the common sentiment on the web, web sites have a moral right to ban deep linking. He regards it as simple respect for the property rights of others, and he rightly dismisses one of the most common arguments:
Those who ridicule companies for their opposition to deep linking say the practice only encourages more visitors, who end up browsing through the site. That argument is also used to defend posting of copyright songs and software. It doesn’t wash. Companies have the right to decide whether or not they want to market their products by giving away free copies.
Exactly. The issue isn’t whether or not a company would be better off giving away content for promotional purposes; it’s who gets to decide. Clearly, the owners of the content should decide.
However, several of the comments point out that the technology exists for companies to have their web servers prevent deep linking automatically, without the need to resort to legal restrictions. Bill Dennis doesn’t think that’s an important consideration:
I agree that any Web site that wants to ban deep linking has technology to do so (at least for the time being). That doesn’t prevent mirror imaging. Nevertheless, failure to prevent deep linking is NOT the same thing as giving someone permission to do it.
I disagree. Failure to prevent deep linking IS the same thing as giving someone permission. In the main article, Dennis compares sites that don’t want deep linking with magazines that are wrapped in plastic to prevent people from browsing them on the rack without buying. I believe sites that don’t block deep linking are like magazines that aren’t wrapped in plastic. It’s okay to link, just like it’s okay to look.
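The blocking technology the comments mention usually works by checking the Referer header that browsers send with each request: if it names a page outside the site, the server refuses the deep link. A minimal sketch of that decision, in Python (the host name and function are illustrative, not any particular server’s configuration):

```python
from urllib.parse import urlparse

SITE_HOST = "example.com"  # hypothetical host of the site being protected

def allow_request(path: str, referer: str) -> bool:
    """Decide whether to serve a request, refusing deep links.

    Serve the front page unconditionally; serve other pages only
    when the Referer header points back to this site's own pages.
    """
    if path == "/":
        return True
    return urlparse(referer).hostname == SITE_HOST

# A deep link from another site carries a foreign (or empty) Referer,
# so the server would answer it with a 403 instead of the content.
```

Note that the Referer header is easily omitted or forged, which is one reason such blocking works, as Dennis puts it, only “for the time being.” But the point stands: the server can decide.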
The publishing metaphor for web content causes some confusion. The content of a web site is just files on a computer. What makes it published content, what makes it a web site, is the web server, which is a piece of software that makes the content available to other computers on the Internet. Web servers are active participants in the hyperlink mechanism, including deep linking.
In paper publishing, the printed copies are distributed to retail outlets everywhere, but on the web, copies are only sent out when people ask for them. That is, when their browsers send an HTTP request for a copy. More to the point, people only receive copies when the server grants their request. That is, when the server sends an HTTP response with the requested content.
The HTTP protocol used by web servers offers a complete, well-supported standard mechanism for controlling access to content. If I create a deep link on my site to content on another site, all I’m doing is providing some pieces of the HTTP request for that content. Browsers can take those pieces and assemble a complete HTTP request, but readers of my site can’t actually get the content unless the other site’s servers decide to honor the request. That decision is entirely a matter of how the server and web site are configured by their owner.
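To make concrete what “pieces of the HTTP request” a deep link actually supplies, here is a sketch of how a browser turns a link’s URL into a request (the URL is illustrative):

```python
from urllib.parse import urlparse

def request_from_link(url: str) -> str:
    """Assemble the HTTP/1.1 request a browser would send for a linked URL."""
    parts = urlparse(url)
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {parts.hostname}\r\n"
        "\r\n"
    )

# A deep link supplies only the URL; the target server still decides
# whether to answer this request with the content or with a refusal.
print(request_from_link("http://example.com/articles/deep.html"))
```

The link contributes nothing beyond these few lines of text; everything after that, including whether any content comes back at all, is the serving site’s call.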
It’s ridiculous to allow sites to make up for their failure to use the protocol’s access controls by imposing legal terms and conditions instead. They are in the same ethical position as a man who freely, albeit perhaps foolishly, gives out $10 bills to anyone who asks. If he wants to stop losing money, the solution is not to complain about people who publicize his gullibility, but simply to stop giving out money!