Near the bottom, he comments:
In fact, I would say the new user experience lends itself to futuristic forms of interaction, particularly non-touch gestures. It gives a much more literal feeling of what the system is actually doing – creating and digging through layers of association – and it’s a much more lean-back experience than Futureful was. An evolution of this idea could be great for virtual or augmented reality.
As I was reading the article, I was thinking how cool this kind of exploration app would be in an AR context, but I think I’m imagining something different than David is. One of the compelling things about AR (always-on, immersive AR) is the idea of serendipitous discovery: stumbling on bits of information you didn’t expect but that were somehow relevant. The difficult question, pretty much always ignored, is how the system would choose what to display for you out of a potentially vast collection of data. Imagine walking into any high-traffic area, where there might be millions of bits of information associated with the people and the place around you.
The idea of tying discovery to a semi-random walk through keywords is really interesting. Imagine that the interface for your “discovery” channel on your AR glasses kept a collection of keywords and constantly suggested new ones (based on algorithms like the ones Random uses). As you move through time and space, those “new” keyword suggestions would change, and you could change your current “path” through the data at any time. So, depending on my mood (or just random coincidence) I might have keywords of the sort shown in the image above available to me as I walk into Tech Square in Atlanta (where my office is). The choice of what to show me is now significantly more constrained, and the likelihood of discovering unanticipated things is greatly increased.
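To make the idea concrete, here’s a minimal sketch of what that “discovery channel” loop might look like. Everything here is invented for illustration — the association graph, the location seeds, and the `suggest` function are my own stand-ins, not how Random actually works:

```python
import random

# Hypothetical keyword association graph (what Random presumably mines at
# much larger scale from real data).
ASSOCIATIONS = {
    "startups": ["venture capital", "coworking", "hackathons"],
    "coworking": ["coffee", "startups", "open offices"],
    "georgia tech": ["robotics", "startups", "college football"],
    "robotics": ["sensors", "machine learning"],
    "coffee": ["roasting", "cafes"],
}

# Hypothetical keywords tied to physical places, so suggestions shift as
# the wearer moves through the world.
LOCATION_SEEDS = {
    "tech square": ["startups", "georgia tech", "coffee"],
    "airport": ["travel", "coffee"],
}

def suggest(current_path, location, k=3, explore=0.3):
    """Suggest up to k new keywords: mostly neighbors of the user's current
    path through the graph, with an `explore` chance of jumping to a
    keyword seeded by the current location."""
    candidates = []
    for kw in current_path:
        candidates.extend(ASSOCIATIONS.get(kw, []))
    seeds = LOCATION_SEEDS.get(location, [])
    pool = []
    for _ in range(k):
        if seeds and (not candidates or random.random() < explore):
            pool.append(random.choice(seeds))   # serendipitous, place-driven
        elif candidates:
            pool.append(random.choice(candidates))  # follow the walk
    # Drop duplicates and keywords already on the path.
    return [kw for kw in dict.fromkeys(pool) if kw not in current_path]

# Walking into Tech Square with "startups" on my path:
print(suggest(["startups"], "tech square"))
```

The point of the sketch is the constraint: the location seeds narrow a potentially vast data set down to a handful of place-relevant jumping-off points, while the random walk keeps the suggestions surprising rather than purely predictable.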
Anyone want to prototype this with Argon?