As popularized by Google Page Rank, the idea of gleaning valuable information about documents (or anything) from traces of user-behavior (e.g. links, footprints, etc…) is quite the rage. Scientists constantly approach the online world with new tools to uncover or analyze traces of human behavior.
I think it is interesting to point out that the online world is an artificial place, one that was and continually is engineered and reengineered. Since web designers and others are in control of this environment, we can do more than watch humans behave: we can design environments that lead users to behave in ways that reveal more about whatever we are trying to gather data on.
For example, in a search engine, we can construct the search results page so that it also collects data on potentially relevant hits. We could occasionally place a search result that technically has a lower score within the top ten results. The click-through data on this deliberate “mistake” can then be used to calibrate the search engine.
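A minimal sketch of that idea, in Python. Everything here is illustrative (the function names, the 10-result page, the explore rate): the point is simply that the page occasionally promotes a lower-scored result and records whether users click it, giving the ranker feedback it would never get from a purely static page.

```python
import random

def build_results_page(ranked_results, explore_rate=0.1, rng=random):
    """ranked_results: list of (doc_id, score) pairs, best first.

    Returns the ten results to display plus an optional 'probe' record
    describing a deliberately promoted lower-scored result.
    """
    page = list(ranked_results[:10])
    probe = None
    if len(ranked_results) > 10 and rng.random() < explore_rate:
        # Pick a result from just below the fold and swap it into a
        # random slot in the top ten, remembering where it landed.
        candidate = rng.choice(ranked_results[10:20])
        slot = rng.randrange(10)
        page[slot] = candidate
        probe = {"doc_id": candidate[0], "slot": slot}
    return page, probe

def log_click(probe, clicked_doc_id, click_log):
    # A click on the promoted result is evidence the ranker
    # underscored that document; a non-click is weak evidence
    # the original ranking was right.
    if probe is not None:
        click_log.append({**probe,
                          "clicked": clicked_doc_id == probe["doc_id"]})
```

Over many page views, the `click_log` entries become a labeled sample for recalibrating the scoring function, much like an A/B test folded into ordinary use of the site.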
Granted, building a webpage that collects user data is not a new idea, but it would be interesting to apply that approach to online spaces that scientists have so far studied in a purely naturalistic fashion.