Today I tried out a new service by one of the smartest guys I know, Michael Geist. It’s called iOptOut and it’s a gateway for Canadians to voluntarily put themselves on do-not-call lists *before* a company contacts them, as well as giving them legal recourse for when a company calls anyway (those bastards). Within hours of signing up for the service I got 8 calls from 1-480-543-1171. Spooky coincidence.
A customer service representative indicated they worked for Fido. They tried to acquire various pieces of identification: passport, driver’s licence, citizenship number, SIN. The agent was rude the whole time and started asking if any of the information was fake.
They had the nerve to call us back again. Fido has confirmed they are not a legitimate reseller of Fido phone service. The Ottawa Police (Canada) are now launching a fraud investigation. — Jeremy
(1-800 Notes is a great site for looking up the telemarketers before you give them any information — I’m glad I did)
Last night I finished reading Accelerando by Charles Stross. Like many of the books I read these days, I heard about it from another blogger. It feels like a spiritual sequel to Alvin Toffler’s Future Shock, John Brunner’s The Shockwave Rider and Warren Ellis’ Transmetropolitan. It is about information overload to the nth degree and too much change in too short a time.
Accelerando is broken into 9 fragmented stories with decades passing in between them. This is too bad because it was the initial segment, only a few years in the future, that I found most interesting. Our protagonist is hooked up to a portable computing network of software agents that he uses to continually data mine and plug into a “river of news”. As he communicates with other people he spawns off parts of his “distributed brain” to research more information and get back to him.
The greatest inventions usually come from seeing a possible connection between two separate things (e.g. peanut butter and chocolate). Like in The Shockwave Rider, our protagonist is successful because his ability to gather and process information is so far beyond an average person’s. Being immersed in the information stream, he sees the connections and trends that others can’t.
These connections lead to so many successful ideas that he can’t possibly execute on them himself – the time it would take to implement them would take away from the information processing that is his true talent. He makes a career of giving away his ideas and surviving off the reputation gain and the support of the sponsors he’s made so successful. Very much like Doctorow’s concept of whuffie – reputation as currency.
The book progresses to the post-human experience after digitization has reached the point that we can successfully encode human personalities digitally. Post-death society, heads in jars and living bodiless on the internet. There’s a really good bit on how the next major species will be intelligent corporations and artificial spam intelligences. But what really interested me were the initial chapters, set so close to the beginning of the 21st century: how do we use technology to deal with information overload?
(You can get a copy of Accelerando for free online – which is very useful because the copy I borrowed from the library was missing the last page – now that’s frustrating)
It’s Getting Harder to Find Information
We’re in the middle of a great revolution where anyone can become a self-publisher. But that’s the crux of the problem, isn’t it? Anyone can become a self-publisher. The low barrier to entry makes the competition for attention fierce. At some level we’re all on par with the lowliest spammers, trying to compete for other people’s attention. There is so much new content being created all the time that the only way old content stays in the public record is if the Great Google God returns it in a search result.
This is only going to get worse because Google has created a new caste of blogging serfdom. People create content and splash Google ads on it with the hope that it will do well in Google search results so they can get paid.
There’s many a “business model” that relies completely on Google-Google Search for traffic and Google AdSense for revenue. And there’s an even larger amount of so-called business models that rely almost completely on Google for traffic, even if the money comes in via other means.
I think you know what happens to the money when the traffic stops.
I use the term “business model” above loosely, because a model that is entirely dependent on an outside company, for either traffic or revenue or both, is not really sound. You’re not in charge and you have very little control, because if Google decides to change the rules, you’re out of luck. Based on that, I would argue that relying on Google is not a business at all.
I’d say you work for Google.
From the Teaching Sells e-book
Where are the Smart Filtering Agents?
One of the things I remember clearly about the idea of intelligent agents in the early 90s was how it was going to revolutionize how we consume information. Instead of having to *gasp* pick up a newspaper, autonomous software agents would search the net, finding tidbits of information we were interested in, adapting and learning from how we interacted with the results. Sci-fi books like John Varley’s Steel Beach dealt with the relationships between humans and these evolving artificial intelligences.
Take a moment to glance at the Wikipedia page on software agents; it’s quite good.
The 90s hope for intelligent agents never quite materialized. RSS has gotten us part of the way: now we can pick the voices out of the chaos that we allow to push information to us. We can subscribe to alerts on search terms that interest us. But aside from custom recommendation engines like Netflix and Last.FM, there isn’t really a bot out there finding information for us.
The Future: RSS Filtering
I see the first baby steps of software agents delivering news. There are several sites competing to filter through a list of RSS feeds and recommend the best news items to you.
There’s also the “build your own” filtering agent approach.
And let’s not forget the ability to monitor search terms.
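A “build your own” filtering agent doesn’t have to be fancy. Here’s a minimal sketch in Python of scoring feed items against a keyword profile and surfacing only the best matches. The entry dicts and interest list are made-up stand-ins for what a real feed parser (e.g. the feedparser library) would hand you:

```python
# A toy filtering agent: score feed entries against a keyword profile
# and keep only the items that match your interests, best first.

def score_entry(entry, interests):
    """Count how many interest keywords appear in an entry's text."""
    text = (entry["title"] + " " + entry["summary"]).lower()
    return sum(1 for word in interests if word in text)

def filter_feed(entries, interests, threshold=1):
    """Keep entries matching at least `threshold` keywords, best matches first."""
    scored = [(score_entry(e, interests), e) for e in entries]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [e for s, e in scored if s >= threshold]

# Hypothetical entries standing in for parsed feed items:
entries = [
    {"title": "New RSS aggregator launches", "summary": "filtering news feeds"},
    {"title": "Celebrity gossip roundup", "summary": "red carpet photos"},
    {"title": "Data mining your newsreader", "summary": "rss filtering tips"},
]
interests = ["rss", "filtering", "data mining"]

for entry in filter_feed(entries, interests):
    print(entry["title"])
```

A real agent would re-run this over your subscriptions on a schedule and, ideally, adjust the keyword weights based on what you actually click.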
Is the Answer Better Gatekeepers?
Is having an intelligent software agent the right approach, or is it better to let humans do the filtering? The past year has seen an incredible rise in using crowdsourcing to decide what the best available information is. This is how digg, reddit, StumbleUpon and the delicious popular page find interesting information: the wisdom of mobs. Unfortunately, when the user base grows too large it gets watered down to the lowest common denominator.
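The mechanics behind these sites are usually some variant of votes discounted by age, so that fresh popular items float above old favourites. A gravity-style sketch in Python; the formula and constants here are illustrative, not any site’s actual algorithm:

```python
# Gravity-style ranking in the spirit of social-news sites:
# a story's score is its votes divided by a power of its age,
# so votes count for less as the story gets older.

def hotness(votes, age_hours, gravity=1.8):
    """Illustrative hotness score; the +2 offset tames brand-new items."""
    return votes / ((age_hours + 2) ** gravity)

# Hypothetical front-page candidates: (title, votes, age in hours)
stories = [
    ("old favourite", 200, 48.0),
    ("fresh hit", 50, 1.0),
    ("brand new, few votes", 3, 0.5),
]

ranked = sorted(stories, key=lambda s: hotness(s[1], s[2]), reverse=True)
# "fresh hit" outranks "old favourite" despite having a quarter of the votes.
```

Tuning the gravity exponent is the whole game: raise it and the front page churns quickly, lower it and yesterday’s hits linger.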
The other approach is to find human editors to act as your gatekeeper. I’m not talking about hiring your man in Mumbai, but rather niche news sites like Slashdot, BoingBoing and Fark, and to a greater extent using the network of blogs you enjoy to act as your information gatekeepers.
The last.FM music service is an amazing tool for finding new music to listen to. What makes it even stronger is its ability to find your “neighbours” – people you don’t know who have similar musical tastes. Listening to your neighbourhood radio is like having a friend who’s a DJ and always pushing new and interesting songs at you.
I don’t know any of these people, but I like their musical tastes.
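Neighbour-finding like this boils down to measuring taste overlap. Here’s a toy version using Jaccard similarity on artist sets — a simple stand-in, not Last.FM’s actual method, and the users and artists below are invented:

```python
# Find "neighbours": the users whose listening history overlaps
# most with yours, by Jaccard similarity on artist sets.

def jaccard(a, b):
    """Share of the combined artist pool that both users listen to."""
    return len(a & b) / len(a | b)

# Hypothetical listening histories:
listens = {
    "me":    {"Radiohead", "Boards of Canada", "Aphex Twin"},
    "alice": {"Radiohead", "Aphex Twin", "Autechre"},
    "bob":   {"Shania Twain", "Nickelback"},
}

def neighbours(user, listens):
    """Everyone else, ordered from most to least similar taste."""
    mine = listens[user]
    others = [(jaccard(mine, artists), name)
              for name, artists in listens.items() if name != user]
    return [name for sim, name in sorted(others, reverse=True)]

print(neighbours("me", listens))  # → ['alice', 'bob']
```

A neighbourhood radio station is then just the artists your top neighbours listen to that you haven’t heard yet.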
Maybe instead of software agents we need software that connects us to other people who have similar interests? I read LifeHacker because I know the editors have very similar sensibilities to what I find interesting. Jon Udell shares my same love for information organization and manipulation. Jeff Atwood has perhaps one of the most engaging blogs for general geekery and love of programming, and his twitterstream is always full of interesting links.
The only downside to filtering information is that restricting your input to people you already agree with creates a reinforcing feedback loop – and erodes your patience and your ability to be around people with differing outlooks.