Technology is transforming our world quickly. But it's happening even faster than you think. For example, when a business brings a new server online, how long do you think it takes before that server gets compromised? Gets infected by a virus? A year? A month? A week?
Jeff Howe, professor at Northeastern University, has the answer. He says, "When a new corporate server comes online, it will be compromised in 25 minutes."
Howe continued, "The network — the internet — is such an enmeshed network, of such a dense weave, in ways that the human brain can't even conceive."
In other words, our human brains aren't all that different than they were when we sheltered in caves. But those same brains have created a world that's changing faster than we can understand or cope with.
That's why Howe, along with Joi Ito, director of MIT's Media Lab, collaborated to create what they're calling a new operating system for the world. Their new book, "Whiplash," offers pro tips for retraining our brains and societies. Things like prioritizing disobedience over compliance. Seeing the world as groups of systems rather than objects. And navigating life with a compass to get you through moments of uncertainty, rather than a map that falsely projects certainty.
JOI ITO: First of all, I think there are more things coming together at the same time. So you’ve got A.I., and synthetic biology, and the internet, and Moore’s Law. It’s hitting all together, and they all accelerate and affect each other. So there are a lot of things that make this a very peculiar time.
JEFF HOWE: I think it’s what Joi said. It’s not even so much — I do think that the pace of change is increasing. But it is this confluence. We’re not at it yet. But it is soon. It’s gene editing techniques. It’s artificial intelligence. And it’s just fascinating that technology can move at this really rapid pace, but it takes a long time for humans, and especially institutions, I think, to change and to innovate and to see what innovations will stick, and what do people want. There’s a lot of trial and error that has to take place.
MEGHNA CHAKRABARTI: Can you give us a couple examples of technologies right now that are outpacing our ability as a society to understand them?
JOI: So there’s CRISPR, which is the ability to edit genes very cheaply. But Kevin Esvelt in my lab is working on CRISPR gene drive, which is the ability to use CRISPR to modify an organism, like the genes of a mosquito, so it’s resistant to Zika. What CRISPR gene drive does is create a system where all of the offspring between this modified mosquito and any other mosquito in the wild inherit this resistance. So you could feasibly deploy a modified mosquito that’s resistant to Zika, and within several generations eliminate any mosquito that could carry Zika. And so the question is, who decides? How do you engage the affected communities? And then also, how do you prevent people from doing irresponsible things? Because with CRISPR, high school kids can do it. And it’s just the beginning. Things like CRISPR will continue to come out.
JEFF: Just to give you an idea of the speed, CRISPR was discovered after we started working on the book, was perfected around the time we started composing the book and really getting into the writing. And reached high schoolers by the time we were in edits.
JOI: And one key thing as a scientist that is important is, the first deployment of a technology often affects the way the public thinks about it. If electricity had not been lighting up Paris, but had been electric chairs, people would have had a very different view of electricity at the beginning. Whereas GMOs, you know, the first deployment was really a company trying to prevent people from having freedom. So there’s a very negative view of GMOs, when in fact, GMOs can be used to do wonderful things. And so one of the things that we want to be very careful of with CRISPR gene drive is that the initial deployments are managed by the people who are affected and are done in a really responsible way. Because if they’re done in an irresponsible way, I think it would really set back our ability to do research.
MEGHNA: This is so fascinating because as you write in the book, the sort of cycle of technology in the past was this ferocious period of development, then some stasis, so we as a society were sort of able to catch up. But what you’re saying now is that the disruption is constant. The pace of change is so fast that that luxurious time we had as individuals in society to kind of figure things out, we may not have that anymore.
JOI: And I think, if you think about architects or policymakers as central planners, things used to move slowly enough that you could kind of manage them. The problem is when you get complex and you get fast, it’s kind of like the Soviet Union. You can’t centrally plan something that complex. And so what you need to do, we call this participant design. Which is that, instead of top-down design, you have to let every element in the system be responsible for its own design. And be kind of bottom-up. You’re not stuck in traffic, you are traffic. It’s not the central scientist or company delivering stuff to the public. It should be everybody.
MEGHNA: So let me ask you, you talk about the internet in this chapter in quite a bit of detail, and I want to read a little section here. You say, "The internet has played a key role in this, providing a way for the masses to not only make their voices heard, but to engage in the kind of discussion, deliberation, and coordination that just recently was the province of professional politics." And I think we’re living in a moment now where we’re also dealing with the potential downsides of this aspect of emergent systems. I mean, I’m in the news business, so I’m thinking, everyone’s thinking about the danger of fake news. And that’s also part of different constituents having a voice in the discussion. So how do we manage the potential negative aspects of these systems as they come up?
JEFF: You know, I think there was an unfortunate sort of techno-utopianism. That there was an embrace of technology as sort of inherently democratizing, and that that was an unconditional good. Like, democracy will always be good. If you get more people involved, that’s more equitable, it’s more fair. And it’s like, well, guess what? Sometimes getting more people involved means that you don’t like the product of their decisions, or that it makes it easier to promulgate lies and rumor and calumny. And I think we’re seeing the effects of that. And we have to remember that technologies are neutral.
JOI: Yeah, and I think, if you were to compare the two candidates, Donald Trump — this may sound controversial, but I think he reflects the principles in the book much better than Hillary did. He rode the wave of a lot of the stuff that’s happening. And, to be honest, if you look at the Twitter data from our lab, it shows that the mainstream media journalists weren’t connected to the Trump supporters’ conversations on Twitter. At all. So they weren’t talking to each other. So I think this should be a wake-up call for people who are trying to run top-down political systems, for journalists who are just talking to each other. And there’s a kind of elitism that’s happened. So, again, I was thinking about science. If you think about the original moonshot, we had kids in Iowa building model rockets. It was an optimistic dream that included everyone in America. Now the moonshots are Silicon Valley moonshots, or the medical establishment in Washington, D.C., and Boston. And we kind of left a whole bunch of people behind. And so I feel like, assuming that democracy is a corrective system and that we do take this moment to be reflective, I think it’s a great reflection of exactly the whiplash we’ve created, where the elite have gone ahead of the body and the body is sort of left behind.
MEGHNA: So really, as you write in the book, you’re trying to offer us all a new O.S. for the world. But underlying this, you confront a fundamental truth, in your words in the book: "our current cognitive tool set leaves us ill-equipped to comprehend the profound implications." And your mission is to provide readers with tools to bring our brains into the modern era. What is in our current cognitive tool set that makes us ill-equipped to deal with our present and future?
JEFF: An addiction to this illusion of certainty. The world has never been a very predictable place. But now there are real costs associated with pretending that it is. And you used to have a better shot at it. You had your sales team, you brought them all together, you spent weeks putting together your sales forecast, and it might have had a decent shot of giving you an idea of what future earnings would be. But volatility is increasing across the board, whether it’s commercial, or planning for your family, or an institution planning for its future.
JOI: It’s almost impossible to know everything and to be sure. So it’s kind of this idea of feeling okay with the fact that the place is out of control. That you go with the flow. That being prepared is much better than having a plan. There was a great John Cleese joke: How do you make God laugh? Tell him your plans. So it’s kind of an anti-planning approach. But it doesn’t mean that you then just don’t do anything. You have to spend the energy that you used to spend on planning and learning and knowing everything in completeness on developing an ability to know what’s going on, to be agile. I think a lot of what institutions do these days is they start out on a mission, and then they get so modular and so organized and so scalable that by the time you look at them, whether you’re talking about the Humane Society or Twitter, you sort of forget why you’re doing what you’re doing.
MEGHNA: So personally, as I try myself to cope with and navigate all the changes that we’re all living through, I veer wildly between techno-utopianism and techno-dystopianism, honestly. But what I appreciated about this book is it does give you a tool set about how to not fall on either extreme and chart a more sensible path forward. But I wanted to ask you, it makes me think that in terms of young people today, we’re not teaching them any of the right skills.
JOI: We’re educating kids as if they’re going to end up on top of a mountain by themselves with no cell phone and just a No. 2 pencil. In fact, you have a phone with Google and messaging, and what kids intuitively know is they don’t have to memorize everything. They know they’re going to have friends who are experts in something, and they’ll talk to them if they’re experts in something else. So there’s this notion of being always connected to people and resources, and of not having to learn knowledge and skills that you can get online, so that you can spend the same amount of time and brainpower on something else. And so that’s very different, because now you’re learning to learn, and learning to search, rather than sort of memorizing and stockpiling as if you’re going to be out in the wilderness by yourself all the time.
JEFF: And to put a fine point on it, Horace Mann comes in in the 19th century and reforms American education to make it compulsory, so everyone’s going. But also to make people who could then go out and act like a widget. And this was a philanthropic urge. He wanted people to be able to survive and be employed. But they were going to have to become part of an assembly line to do that. So you get things like periods, and punctuality, and looking at the clock. Most people coming from agrarian environments didn’t look at watches or clocks. So you have to inculcate this. The problem now is, if Horace Mann were alive, he would be like, hey dummies, drop it! It’s the opposite! If humans are going to act like machines, well, our machines got really good! Acting like a machine is the one way they’ll never be employed. So they have to become good at stuff machines can’t do. Meanwhile, our educational system, just as Joi said, is teaching them to do things that machines can do. It’s crazy!
JOI: I do think that we’re eliminating passion, which is a key driver. We have a group at the Media Lab called Lifelong Kindergarten, because we believe kindergarten is one of the best learning experiences and we should continue it. We call it the four Ps: projects, peers, passion, play. So instead of textbooks, where the things you learn are very hard to apply out of context, learn through projects. Peers: learn from each other. Kids are known to remember the things they hear from friends better than they remember them from teachers. And in fact, when you teach, you remember best. Passion: pick the project you want to work on. So if it’s World War II history, that’s what you do, and then you learn math and you learn geography. And in the world of A.I., you want people to have different opinions and different passions, so you gotta allow the kid to have passions. I meet a lot of kids coming into MIT, and they have all the skills, and you ask them, well, what do you want to do, and — I don’t know. So helping kids figure out what they want to be passionate about. And it can change over time, but always nurturing passion. The last P: play. It turns out that play is the way you get people to be creative, to think about things that aren’t predictable. And so as we think about an A.I., roboticized world, what you want kids to be is passionate, playful, collaborative, and project oriented. And right now, our schools are textbook oriented, you better not be cheating, we don’t care whether you’re passionate, and you sure shouldn’t be playing too much.
This article was originally published on January 12, 2017.
This segment aired on January 12, 2017.