Interview with a writer: Jaron Lanier

22 March 2013

9:30 AM

In his new book, Who Owns The Future?, computer scientist Jaron Lanier argues that as technology has become more advanced, so too has our dependency on information tools. Lanier believes that if we continue on our present path, thinking of computers as passive tools rather than as machines that real people create, our myopia will leave us understanding less about both computers and human beings, and could bring about the demise of democracy, mass unemployment, the erosion of the middle class, and social chaos.

Lanier encourages human beings to take back control of their own economic destiny by creating a society that values the work of all industries, and not just those with the fastest networks. By monetizing information, he foresees a more egalitarian society, which can still adhere to the principles of free market capitalism.

What will the future economy look like if technology keeps advancing the way it does and we do nothing?

Well, identify almost any human role in our current society, and imagine it being aggregated into a software scheme in the future where the people don’t get directly paid anymore. We can already say that there are virtual editors of newspapers. In the future nearly every existing job will be gradually weakened by cloud software. The only one left standing at some future date is the owner of the largest computer on the network. Whoever has the biggest computer wins in our current system.

Is this true for politics as well?

Yes. If you have the biggest computer and the biggest data, you can calculate how to target people with a political message and achieve an almost deterministic level of success. Politics then becomes about who has the biggest computer instead of what the agenda is. The way Obama won the last US election was by having the best computer strategy. That method of winning an election works, but if it is to be the future of politics, politics will no longer have meaning. The path we are on is not compatible with democracy.

You say ‘It is entirely legitimate to understand that people are still needed and valuable even when the loom can run without human muscle power. [But] it is still running on human thought.’ What do you mean by this?

The reason I brought up a loom in the book is because it has already appeared twice in the history of people and machines: once in very ancient times with Aristotle — the idea of the self-operating loom — and then [in England in the 19th century] with the Luddites.

One of the earliest computers was also an automated loom. Let us suppose in the future there is some sort of automatic loom that can just turn out clothing for you. Where does the design for the clothing come from? Somebody might say: from an artificial intelligence algorithm, running on cloud software, using big data. But this data actually comes from a large number of people who have been anonymized and disenfranchised. If there were proper accounting of where the data came from, we would see that even in this highly advanced hypothetical automated loom, there would be real people whose data makes it possible to create a design.


And you are suggesting that they get paid, right?

Yes. If there were micropayments made to the people who fed the big data — which allowed that automated loom to make the clothing for you — then there would still be an economy. It’s not as if the people have disappeared from the economy; it’s just that we pretend they don’t exist.
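Lanier's micropayment idea can be sketched as a simple accounting rule: if the provenance of the data behind a machine-made product is tracked, a payment for that product can be split among the human contributors in proportion to their share of the data. The function name, the proportional weighting, and the contributor names below are illustrative assumptions for the sketch, not anything Lanier specifies in the book.

```python
# Illustrative sketch of provenance-based micropayments: a payment for a
# machine-generated design is divided among the people whose data informed
# it, in proportion to each person's contribution. The weighting scheme
# and names here are hypothetical.

def split_micropayment(payment, contributions):
    """Divide `payment` among contributors in proportion to their
    data contributions (a mapping of person -> contribution weight)."""
    total = sum(contributions.values())
    if total == 0:
        return {person: 0.0 for person in contributions}
    return {person: payment * weight / total
            for person, weight in contributions.items()}

# Example: three anonymized designers whose past work fed the
# hypothetical "automated loom".
payouts = split_micropayment(1.00, {"alice": 5, "bob": 3, "carol": 2})
# alice receives 0.50, bob 0.30, carol 0.20: nobody has "disappeared"
# from the economy; the accounting simply makes them visible again.
```

The hard part of Lanier's proposal is not this arithmetic but the provenance tracking itself: knowing, for every output of the cloud software, whose data contributed and by how much.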

You describe how the barrier between ego and algorithm is unavoidable in the age of cloud software. And that ‘drawing the line between what we forfeit to calculation and what we reserve for the heroics of free will is the story of our time.’ Can you explain this in more detail?

I think a lot of the critical elements of what we call human society — particularly our economy — have to do with what we define as the proper realm of free will. So for instance, in capitalism, we decide not to intervene in the functioning of the marketplace, which is a sort of inanimate, algorithmic result of the things that we do. We say that instead of allowing human politics to decide everything, we are going to limit our reach and allow this more abstract mathematical thing, the marketplace, to sort out our affairs. So the interesting question is: where do you put the end of the human ego, and where do you let the algorithm sort out human affairs? I don’t think we should decide in advance where that line should be. Instead we should experiment to find it. In the old days, before computation, this was the line between market and government.

Are there cases when giving up ego and letting algorithms play out are beneficial?

In the case of a marketplace, yes. But this is why it is so critical that marketplaces are honest and cannot be corrupted. The problem with our cloud software right now is that it does tend to be run by the person with the biggest computer on the network, and to serve certain interests more than others. It’s not an honest broker. We constantly run into a situation where a company like Google is saying: we are being the honest broker. Of course that is ridiculous, because they are a commercial concern. So in order for us to be rationally ready to cede control to some cloud software, it really does have to achieve some state of honesty. I believe that should look more like a real marketplace.

You criticize the culture of the tech world several times throughout this book, but you are also part of it: can you explain this paradox?

There are a lot of very positive things about the tech world. It’s remarkably unprejudiced, and I’ve never encountered racism in it. There are a lot of good qualities, so I don’t want to criticize it too much. I remain in it, and I enjoy it. However, there is a smugness, a kind of religious aspect to it. There is a sensibility that says: we have skills that other people don’t, therefore we are supermen and we deserve more. You run into the attitude that if ordinary people cannot set their Facebook privacy settings, then they deserve what is coming to them. There is a hacker superiority complex to this.

Do you think this culture of superiority in the tech world is making society less democratic?

Well, I think this culture really undermines our discipline, because to me the only proper way to describe the profession of engineering is as serving people; otherwise it’s not a sensible activity. There is no rational basis for it without people as the beneficiaries. Simply in order to function sensibly I need to believe in people, not robots. When we don’t put people at the centre of the world, I think we create rather bizarre technologies that don’t tend to make sense.

Describing ‘humanistic information economics’ you say everyone should be entitled to universal commercial rights when it comes to data. Can you explain how this would work?

Currently, if you have a big computer you get to keep your data secret, and you have tremendous rights: nobody can touch it, and the government can’t even see it. To reach Google’s computers, for instance, the government has to go through an elaborate system of legal requests. Now what I would like to see is a situation where everybody has commercial rights to data, so that everyone is a first-class citizen who shares the same interests as anyone with a bigger computer.

How could this be done? 

I am advocating a certain kind of role for government in this scheme, for the simple reason that if you rely on a private concern like a Facebook or a Google to own your personal identity in the world for you, it makes you particularly vulnerable: companies die over time, and they also go through periods of corruption and dysfunction. So we cannot have [so-called] too-big-to-fail digital companies. People must have some self-determination, and some social mobility, independently of whether some company is failing or not. Otherwise you cannot have an authentic market, and you cannot have real capitalism.

Many people — particularly in the US — are suspicious of government: do you think they will be convinced of your argument?

People think the idea of a big company owning your digital identity is better than the government doing so, but actually it’s just the opposite: it’s much more market-friendly to have the government cover those basics, because it creates continuity and lessens the dependency. There has to be a government function for some of the basics of digital identity. I don’t see any way around that.


  • William Presenod

    This is almost like a “Catch 22.” On one hand, you want technology to grow to make life easier, but on the other, it can take away lots of jobs. I have personally seen this happen in multiple industries!

  • Muhammad Lal

    Thank you for sharing such nice info with us. I like it, it’s very informative. Keep sharing this type of information to keep in touch with people.

  • disqus_4WokvTow4b

    If you are interested how the economics should be, I recommend you to check out Jacque Fresco’s Venus Project and his ideas.

  • nik_the_heratik

    What a ridiculous premise for an article, let alone a book. Having lots of data can be an advantage, but so can having oil or another important natural resource available. But having better computers is not as important as the quality of the people who are using them, both in terms of their ideas and in terms of the culture of the organization they are a part of. And I too am a computer scientist, so I know what I’m talking about here.

    You can have a group of the smartest guys in the room, like Enron, and they could have every technological advantage and still completely trash the company because of the way they run things. The same is true, to a lesser extent, with Microsoft, or Atari, or the Soviet Union in the 60s and 70s, or Germany in the late 30s and early 40s.

    If people don’t trust your company, and what it does with their data, then the company could be gone just as fast as Enron or MCI or any other huge business. And anyone running the company with half a brain knows this already.

  • Daniel Warren DuPre

    If everyone owns their own data, then everyone has the right to edit their own data. And everyone will edit to their own advantage. Hence personal data will be corrupt, and therefore useless. Your data, my data, everybody’s data is only valuable as long as it is controlled and evaluated by an impartial external referee, be it Google or Government.

  • BobI

    Yet another illustration of myopic technothinking. If this guy climbed out of his box, then stood back, then stood back some more, he might catch a glimpse of history. Then it might occur to him that, after his own short five minutes on this planet, he’s not remotely qualified to be positing endgames.

  • Pulseguy

    Obama didn’t win because of his computer strategy. He won because there was a weird media conspiracy to make sure he won. I’ve never seen in 40+ years of political watching such a media silence about his errors, and such a hyping of the other guy’s. Think of the time the media spent on Fast and Furious, the economy, Benghazi, and think of the time the media spent on Mitt’s dog from when he was 25.

    • erniebornheimer

      Honestly, most people care more about Mitt’s character (not to mention his policies) than that other stuff you mention. Including me. Except the economy, which reasonable people can disagree about.

  • Urthman

    The facts that Obama won and that Obama had a better computer system do not demonstrate that Obama won because he had a better computer system.

    • Silas Barta

      I tend to agree. Romney shot himself in the foot way too many times for a better computer system to make a difference.

      • Christopher Burd

        Obama ended up with a 4 percent margin – not razor-thin, but less than you’d assume if you hang out in Twitter, say. Could his superior computer system have swung the necessary 2 percent of votes from Romney’s column to his? Who knows… but not entirely implausible.

  • Russell Seitz

    Mayor Bloomberg’s ascent to petty tyranny should give pause to those cheering on the technorati. Those who think they invented the internet could easily reinvent communitarianism as the socialism of creeps.

  • Daniel Maris

    Someone suggested here that we commenters on the posted articles are unpaid workers for the Spectator, ensuring they get more advertising revenue by our just being here and also by attracting lots of lurkers who read our comments. Nice to think we’re useful if that’s the case! We’re the entertainment!!

    • Eddie

      A surprising number of journalists – or those who contribute freelance feature articles – do not get paid at all: those travel features you read in the local paper are often donated for free (by vain readers wanting to see their names in print).
      I used to submit free articles to a local lifestyle mag as a favour, expecting it to be returned one day. Well, when it came to the owner of the local lifestyle mag doing a favour for me, she sent me a bill for £100 + VAT. I told her to shove it up her fundament with her scruples!
      My advice: don’t give it away. Someone is making a profit and if it ain’t you – then why?

  • ramesh rghuvanshi

    My question is: how can we stop the advance of science and technology? Man is curious and his spirit cannot stand still. If he remains standing still he will die within some time. I know there is a paradox in our nature: we want to advance, and are exhausted by constant moving. How to solve this paradox is the real grave problem mankind is facing.

    • Booyah987

      I don’t think anyone is calling for anyone to stop the advance of science and technology, but we need to recall that science and technology is a human endeavor. We should question the possible consequences of our actions as we go along, and we should drive technology to serve the common good of the people–not simply a small group. We have to be careful these days because we’ve just empowered and given tons of money to a previously disenfranchised group of nerds (especially after Democrats used them to secure their POTUS victory; think they are just going to nap for the next four years?) who have a very different idea of what the future should look like than perhaps the common person. Consumers tend just to see technology as a series of shiny new things to quickly acquire for bragging rights within their peer group, but a lot of what’s going on out there is a calculated attempt to manipulate the mores, values, and policies of the future society.

      • Daniel Maris

        I think the way things are going points up the need for effective world government soon (which, incidentally, is not the same thing as a single world sovereignty). People have already posted designs for 3D printers to print guns for instance. We might have very serious cause for concern if someone could put out a design for the 3D printing of anthrax or similar.

        I have no wish to trammel our creative spirit. I am all in favour for instance of us backing Elon Musk’s desire to get us off this planet, exploring and settling the solar system, starting with Mars (which has the potential to be turned into a very nice home for humans).

        However, there are clear dangers ahead so we ought to slow down rather than continue with our practice to date, which is akin to taking the corners of an alpine road at 50 mph.

        There ought to be an international system for proper review of scientific and technological advances so that concerns about these can be properly evaluated and control measures put in place where necessary.

      • pearlsandoysters

        Up to the point! I guess that the general public actually sleepwalks into new modes of thinking and is being ushered into the new bright world that’s being engineered behind the closed digital doors. On the plus side we still have some time to re-consider what’s going on.

  • erniebornheimer

    Brilliant! This is the first time I’ve read anything along these lines. Lanier is absolutely right, to the extent that we assume (as he does) that we should preserve capitalism. But lots of us don’t buy that assumption. We either have to monetize everything as Lanier suggests (and make micropayments easy to the point of being automatic), or we should de-commodify and de-monetize everything to the maximum extent possible.

    • Daniel Maris

      We will probably go both ways at once! I can see the scope for homes becoming not just shelters and places for human intimacy but also mini factories and farms. This has started already with photovoltaic panelling, which can give homes an independent source of power. We have already seen homes become to some extent warehouses, with large freezers enabling us to store food for long periods.

      We already have the ability to paper print in the home and most of us can now use computers as mini offices.

      3D printers will give us the ability to manufacture many useful objects in the home: spare parts, kitchen tools, DIY tools, toys for the kids.

      I can also foresee great potential for growing fresh food at home, perhaps beginning with salad and fruit. The time may soon come when we have the equivalent of a freezer cabinet in our homes producing a range of fresh vegetables, fruits, and pulses using hydroponic methods and artificial lighting.

      We may also be able to extract water from the atmosphere (it can already be done but is expensive because energy is expensive), and so disconnect from the water mains, saving ourselves hundreds of pounds a year. Waste water could be evaporated off.

      If Low Energy Nuclear Reaction (Cold fusion) technology can be made a reality, we may be able to produce all the energy we need cheaply in the home. Then we can go completely off grid, with sewage being sanitised/turned into fertiliser for the home food units.

      Waste could be pyrolysed in the home, leaving a small amount of inert waste that we could take ourselves to a local waste facility.

      The home could become a complete “life support base”.