Edward Tenner: 2018 National Book Festival

October 9, 2019, by Luis Garrison


>>Anne-Imelda Radice:
Good afternoon. I’m Anne-Imelda Radice, and I am
a senior advisor to the chairman and director of the Division
of Public Programs at the NEH, the National Endowment
for the Humanities, which is the sponsor
of this room. I am so pleased to
welcome some amazing voices that have been heard
all day long. I wish we could just
do this all the time. [ Applause ] I was going to proudly say,
thinking that I would be in a special position, this
is my 16th Book Festival. However, the woman who
was our stage manager and directs this room, this
is her 18th, so I am humbled. [ Laughter ] [ Applause ]

We are in for a treat. Professor Tenner’s book,
The Efficiency Paradox: What Big Data Can’t Do, absolutely was a
feel-good book for me. Now in business school,
there are all kinds of books and papers and things you’ve
got to absorb that tell you how to do things in a better way. The word efficiency is on every
other page, and it’s something that is intimidating but you
feel is absolutely essential if you’re going to be a success. Doctor Tenner is a very
respected professor. At the moment, he’s at the
Lemelson Center for the Study of Invention and Innovation
at the Smithsonian. He has an AB from Princeton
and a PhD from Chicago. His writings, thank
heavens, are everywhere — the New York Times,
the Washington Post, the Wall Street Journal,
the Atlantic Monthly, et cetera. Two other books that I
would recommend you read that he has done are
Tech Speak, which is from 1986, and then one that’s really, really amazing, Why Things Bite Back: Technology and the Revenge of Unintended Consequences. In fact, he’s going to be talking about unintended consequences, I’m sure, with us today. He’s going to be exploring
fields like media and culture, education, transportation,
and medicine, and I had to hold myself back
from quoting from this book because I want him to
tell you all the things that just blew my mind away,
but I have to say two things. He has made me feel so much
better that I write on paper as opposed to doing things
through a computer all the time and that — [ Applause ] Yes, there are a few of us left, and he thinks it’s
important even if you’re a technology person
to do that, and second of all, I no longer feel guilty when I
get into an Uber and I get upset with the driver who wants
to follow the GPS instead of being logical and
following directions, so you have no idea
how you’ve lifted that pall from my shoulders. In a way, the professor, through
his wisdom and his research and I think a great
sense of humor as well, allows us to stop
and smell the roses. So Professor. [ Applause ]

>>Edward Tenner: Thank you for
that wonderful introduction, and thanks to the Library
of Congress, and especially to the National Endowment
for the Humanities for making this possible. It’s really quite
an honor to be here. I should start with
a confession. I am not a professor, but
I play one on YouTube, and I’ve been an independent
writer for over 25 years now, and I’ve really loved it ever
since the employment crisis of the 1970s happened
after I received my PhD, and I went from the fast
track to the scenic route, and I haven’t looked back. So I’d like to introduce you
to some of the ideas in my book and to show you why I
am half enthusiastic and half equivocal about big data. Are there any Chicagoans here? Could you show the
slide, please? Yes. So I guess it’s
not really behind me. The people in the center will
have to look at either side, but that building — does
anybody recognize that building? So I don’t think we
have Chicagoans here, but that building is one of the most interesting
buildings in the United States. This is the former Ruben
R. Donnelley Calumet Plant that was built from 1912 to the
late 1920s, and it was once one of the largest if the not
largest printing plant in the world, and here is
where Sears, Roebuck catalogs and Bell System telephone
directories, Life Magazine, Time Magazine, they all
came off presses that worked with massive rolls of
paper, and this was part of what I call the
continuous process economy, and today this building is one
of the largest server farms in the world, and it is used to
route all kinds of information and transactions on the web, and
so the fate of this building, which was very well-equipped
with extra reinforced floors and with wonderful
electrical service — the fate of this building shows
what’s happened to our society. We’ve gone from a
continuous production society to a platform society,
and the companies that dominate our attention
now are no longer AT&T and especially not Sears,
Roebuck but companies like Amazon, Google,
social media companies, and the streaming companies. People have written a lot
about this transition, whether it’s good or it’s
bad, its effect on inequality, gender, all kinds
of other things. My book, though, is different from other critiques
of Silicon Valley. My book is not about
all of the things that Silicon Valley
doesn’t really care about. It’s about what really
matters to Silicon Valley, which is efficiency, and so
my argument is that pursuit of too much efficiency in the short run can make us
less efficient in the long run, that efficiency is
good up to a point, but it can undermine itself. Let me give you a story
about my experience of this. I use a program called Waze
that is now sold by Google. It’s actually not sold; it’s free, or rather it’s in return for advertising that’s really
impossible to look at even when you’re stopped, but
who can object to this? In fact, even starting out as
a skeptic of Waze, I felt I had to try it, and I’ve
now worked myself up to their highest category. I am Waze Royalty, so if
you see a little crown — if you use Waze and you see a
little crown, that might be me. But I discovered something using
Waze to visit, at long last, the Thomas Edison Historic
Site in northern New Jersey, which is really almost
impossible to navigate without using a GPS, and it was
really great getting up there, but on my way back
after admiring all of the wonderful laboratories
that Edison established and that are now so beautifully
preserved, Waze told me to go north when I
should be going south. And fortunately, as the author
of a book praising the value of intuition against total
reliance on big data, I was kind of primed for this, and I disregarded
the instructions, and I headed south, and
I arrived safely at home. So this was a lesson
to me in the value of being a technology
writer and being able to follow my own advice. So my book is really not
another one of the attacks on Silicon Valley, but
it’s really an attempt to go beyond utopianism
and dystopianism. It’s an attempt to be a realist, and I’d like to give you
a half-dozen problems of a platform society that
this book calls attention to. The first is asymmetric
knowledge. Now that just means that the
platform companies know an awful lot more about us than
we know about them. They have so much
information from our data, they know so much more
about our behavior, that it is actually difficult to
criticize them, even for a lot of academics who get very
selective access to their data. This makes it hard to legislate. It makes it hard to regulate. And congressional investigations
might mean we’ll learn more, but the thing to keep in mind is that we don’t really know very
much about the way they work and we, for example, still
don’t know very much about what, if any, influence they really
had on the election of 2016. The second problem is the
importance of tacit knowledge. This is one of the
most underrated factors in intelligence, and
yet it’s ubiquitous. We saw that when Watson
was first demonstrated in playing Jeopardy,
and Watson got a lot of difficult questions right,
more than human champions, but it also got a
very simple one wrong because it really didn’t
have a robust understanding of the world. It had multiple processes
working on different answers, and then they would poll
each other and kind of reach a majority vote, but
that isn’t how our minds work. I’ll give you an example
from everyday life. Take everyday proverbs. Take “a stitch in time
saves nine,” or my favorite as a kid was “a rolling
stone gathers no moss.” Now what I wondered about is
what does that really mean? Does that mean that it is
good to be a rolling stone because moss is a kind
of undesirable accretion, and if you keep going, you will
not get this growing on you, or is moss something good? Is moss a stand-in for money? Does that mean if you kind
of stay where you are, the money will kind
of grow on itself? So I still haven’t figured
that out, but my point is that if you give a child an
unfamiliar proverb from anywhere in the world, the
chances are that a child with no previous exposure to
it will have an understanding of what this metaphor is about. We are metaphorical beings. Our art, our literature
is based on that. I remember hearing once a great
philosopher asking a great literary critic who was
expounding a particularly difficult poem. He said, tell me then
who was this fellow who can’t express
himself clearly? And so that ambiguity is so much
of a part of the texture of life that we take it for granted,
and yet that’s one of the things that artificial intelligence has
so much trouble dealing with. Then there are false positives. I was at the Smithsonian American Art Museum this afternoon, and I saw a wonderful exhibition
which I urge you all to see of Trevor Paglen, who is an
artist who is exploring all of the mysteries of the military-industrial
surveillance complex. It was really a fantastic
exhibition, and one of the features of it
was a film of a string quartet as interpreted by software
that recognizes, supposedly, the genders and age and other
particulars of the players, and as I was watching
it, I was waiting for this really very disturbing
knowledge, this ability of the software to
identify people, and I was actually
somewhat relieved when I saw that the software said of one female violinist this was
a woman holding a baseball bat. So there might be
some risk in this. After all, you know, a
violinist might be arrested as a potential terrorist
or something. It could be a serious problem,
but the serious point of it, though, is that while efficiency
can work 99% of the time, especially in medical cases,
if it malfunctions just 1% of the time, the problems of
dealing with that, the human and material cost of
dealing with false positives, can very easily offset
all of the gains from the 99% efficiency. So when you hear about
high rates of efficiency, you also have to ask,
well, what is the cost of the false positives? What is the cost of the
results that have to be cleared or investigated at
great expense? There’s also what social
psychologists call competitor neglect, which is a
big factor in business. So very often, people using
their big data will have a strategy, but what they may not
realize is the other side has big data, too. So we have a situation
now where more and more organizations are
competing with each other, and each of them believes
that it has the answer through its own use of
data, and yet when you look at that competitive world,
sometimes when those two users of big data are interacting,
the result can be something that is actually foreseen
by neither of them, and this has been
happening in baseball. There was a column by
Jared Diamond recently in the Wall Street Journal
about how the rule changes in baseball have been producing differences in the game (I can’t go into that in detail) that have resulted in lower attendance and thus lower revenue, lower interest by younger fans, and
this is all as a result of the Moneyball idea that
any team can use these tools and somehow become a champion. Well, there’s always an
advantage for the first adopter, but once everybody has it,
it turns out to be different. The fifth of the
issues that I talk about in The Efficiency
Paradox is Campbell’s law. Campbell’s law is the
Heisenberg uncertainty principle of social science. Campbell’s law is the
idea that once you start to measure something and
give people incentives to change their behavior to
optimize what is being measured, that behavior may
change in unwelcome ways. You may get results
that are good for the people being measured but that do not necessarily deliver what the measures are supposed to capture. We’ve seen that a
lot in business. In the 1980s, critics would
say of corporate executives that they were bureaucrats
who were being paid without regard to performance, so we needed to measure the profits that they’re generating
and pay them accordingly, but of course these executives, once they had these
compensation schemes, found ways to maximize their
profits in the short term with accounting techniques
that put off the results for their successors,
and we’ve been living with the results of that. And finally and most seriously
for me, there’s a trend that I call counter-serendipity. We’ve all known the advantages
of accidental discovery. We’ve all been someplace as a
result of some kind of error, something unexpected that
has brought us into places or acquainted us with people that we’ve been very
glad to know. I certainly have. And the problem with
too much efficiency is that by following
existing patterns, which artificial intelligence
can do very well, it can get us into a groove that is
hard to escape from. It can institutionalize
the status quo. Far from being a
radical departure, pattern recognition can
be ultra-conservative, and it can keep us from
the kinds of breakthroughs that have really enriched
our lives over the centuries. Think, for example, of
what traditionally made for a successful
career in the Navy. Well, Hyman Rickover was a
very different kind of person. He did not correspond
to the pattern of successful Navy officers, but if you want an
atomic submarine invented, you need a different
kind of person. So even though he had great
difficulties in his career because of his background, he was able to contribute
something, and there is a Hyman
Rickover Hall now at Annapolis that reflects that. But it isn’t only the military. Think of how many brilliant
breakthrough books did not conform to what publishers
and their editors or even most readers initially
believed a book should be and later became bestsellers and classics. Moby-Dick was a huge
commercial failure at first. People really wondered what had
gotten into Herman Melville, and it’s very well-known that
Harry Potter was turned down by at least 20 publishers. So something that’s radical,
something that’s new, something that breaks with
these established ways of doing things can often
have a difficult time, and yet if we’re not careful, artificial intelligence can
institutionalize a certain kind of stagnation. So what am I suggesting? I think that we should actually
learn from Silicon Valley, and when I wrote the book, one of my most interesting
surprises was how many Silicon Valley people really had a
secret analog side to them. For example, many parents
send their children to schools based on the principles of a European
movement called anthroposophy, in which the emphasis is
on art and spirituality and technology is taught
very late, if at all. And we can also learn that some of the most important
decisions are really those made from instinct rather than
from a rational evaluation of all factors, and Jeff
Bezos himself made this point. People asked him about
how he established Amazon, and he said that he
went over it and he saw that the odds were against him. It looked like a losing bet,
but he saw that in the long run, he would regret it
if he lived to be 90 and wondered what
it would have been like if he had taken the
plunge to the web, and his boss at the hedge fund where
he was working tried to talk him out of it. He did not seem to
be interested in it. There is no evidence that he ever invested in it. He was the more rational one. He was the numbers
person, and yet he lost out on what was clearly
the greatest chance of his career. So in conclusion, I would say that I’m not sure
what I can contribute to understanding the world
around us, which is the theme of this series of lectures,
but I really was impressed with the idea of one of my
contemporaries at Harvard who is now a professor of
law at Yale, Robert Gordon, and he defined an
intellectual in a way that I would make my own credo. He said an intellectual is
somebody who believes that one, the world is run by fools and
two, I could do no better. Thank you very much. [ Applause ]

>>Anne-Imelda Radice: Please
step up to the microphone if you’d like to
ask some questions.

>>Hello. My name is Maria Avera, and I attend Carnegie
Mellon University, and I’m doing my masters in
public policy and management. I’m right here. And my question is, what
do you think about the idea of using the Moneyball
method for public policy, and what would be
its implications?>>Edward Tenner: I’m sorry. The method?>>Yeah, the Moneyball.>>Edward Tenner:
Moneyball method?>>Yeah, for public policy.>>Edward Tenner:
For public policy? Well, I’m not sure how Moneyball for public policy would
work in specific cases. Moneyball, at least
initially, was very useful in finding people who did not
perform that well according to the customary measures
of evaluating people. It was finding more
sophisticated ones. So I would suppose
that you could try to use Moneyball techniques
in assessing candidates or assessing people who are job
candidates in the public sector. It’s harder to apply
Moneyball directly to policies because there is no historical
database on what policies work and what policies don’t, so
the era has changed so much that even if you compiled one, you could say that, well, a policy that was
really successful when the Calumet
Plant was turning out phone books would not
necessarily be successful when they were facilitating
transactions online.>>Okay. Thank you.>>Edward Tenner: Thank you.>>Okay. You were in
line before I was.>>All right. Thank you so much for
taking your time to be here. My name is Noah and
I’m a rising senior at American University
here in D.C., and one of my biggest insights
from your lecture today was about the fact that a false
positive could just destroy the efficiency of an entire
system, and I think ironically, based on, you know, all the
readings that I’ve read, there isn’t really an
institutionalized way of creating efficient
— or, you know, creating like a resistance
to false positives. So do you think — how
could we solve that?>>Edward Tenner: Well, there
is a psychologist named Gerd Gigerenzer, G-I-G-E-R-E-N-Z-E-R, who has received less
attention than Daniel Kahneman. He is not the winner
of a Nobel Prize. I have great respect for
Kahneman, by the way, but I think that Kahneman’s
ideas have sometimes been applied in a one-sided way, and so Gigerenzer is a very good
counterweight, and Gigerenzer in his works has some
very concrete suggestions about statistical techniques
for discounting false positives and for reaching decisions that
are more intuitively grounded. One major branch of statistics,
you know, Bayesian statistics, has intuition, your
prior judgment of how likely something is, as
actually one of its foundations. So there is a tendency sometimes
to use Bayesian statistics as a panacea for everything. I am not a Bayesian
in that sense, but I think you’ll find once
you look into Gigerenzer’s work that there are some very
concrete suggestions about dealing with the
problems of false positives.>>Thank you so much.>>So a few years ago, I
started noticing that a couple of neighborhoods had speed
bumps starting to show up in their roads, and
I’m a civil engineer, and I was kind of, you know,
wondering why that was. I wasn’t using Google
Maps at the time. Then I started using
Google Maps, and I was in a different city, one that I hadn’t
driven through, and so it was directing
me on these roads to get to where I had to go,
and all of a sudden, I end up in this neighborhood,
and I’m hitting what appear to be newly constructed
speed bumps. So it became pretty clear
to me that what was going on was this was the
neighborhood’s reaction to suddenly being turned into
an efficient route by big data. So you know, as civil engineers,
roads are designed, you know, interstates, state highways, and you have big
buffers, things like that, and so one of the goals
is actually to keep all that traffic off of those
little side streets, not running through little
neighborhoods with little kids and stuff playing in front
yards, and keep as much of the traffic as possible
elsewhere, but time is money, and as a result, it appeared that everything had
been collapsed down to a simple “you can save 20 seconds of time on your route by doing this,” which is not
really what the urban planners were intending. So anyway, it’s just
an observation about how big data is
starting to drive that type of decision-making, you
know, based on what I can — as far as I can tell, maybe
like 20-second increments.>>Edward Tenner: Yeah. I mean I have followed the
story of Google Maps and Waze and the battles of neighborhoods
to deal with the traffic. In New Jersey, there have been
towns that have posted signs, had, you know, police
there to ticket people who did not belong there, but it turns
out because of this interaction between the technology and the
unexpected reactions of people, it is really impossible to
predict the actual behavior that will result from a
technological innovation. Thank you.>>I guess I would just
like to get your insight and what your opinion would be. If you look at the production of
food using agricultural methods, so to speak, going from using
pesticides to now going organic, I guess the equivalence of
that would be being aware of how the systems change and
trying to be mindful of that, because I see that a
lot in Metro stations, where I’m a software developer, so I’m on the computer all day
— I’m one of those people, by the way — but the last
thing I want to do when I get out of work is look at a screen, and I’ll be at the
Metro station, and the only thing
you see is this. I mean it’s literally like
mind-drooling zombies. No offense to anybody, but do
you think there will be some — I guess that’s the
only question I have. Do you think there will be some sort of awareness movement, when the time comes, to understand that there are implications
to this kind of behavior?>>Edward Tenner: You mean
for the overuse of screens?>>Yes. Well, technology,
so to speak. Big data usage.>>Edward Tenner: Yeah. Well, I think one surprise
has been that people in general are not as ready
to give up analog experience as the Silicon Valley and
many journalists have said. For example, at one point,
Jeff Bezos was talking about Kindle replacing printing
books entirely, and in fact, there’s been a mild
resurgence of printed books. Now the publishing
industry has other problems, but there hasn’t been a mass
exodus, and one reason for that, as I document in The
Efficiency Paradox, is it turns out that
young people, far from being great enthusiasts
for electronic textbooks, very strongly prefer
the print textbooks, and the textbook publishers
would much rather have electronic-only, because it
could be much more profitable. The book could go dead after
the subscription period and so forth. There are many, you know, many,
many more things you can do to increase your gross margin
with an electronic copy, but the point is that
even among people who are so-called
digital natives, there is an instinctive
realization that for a lot of things analog is better. The sales of pencils, for
example, have continued to rise decade after decade of computing because there are many,
simply better to use a pencil and paper than to try to convert
things to electronic form. On the other hand, there
are so many advantages to electronic writing. I was a very early convert. So to me, we don’t
have to choose. That’s one of the big
points of my book. We don’t have to choose
between analog or digital, and my emphasis is really
on looking objectively on what each technology
is good for and using it for what it’s good
for rather than trying to make everything an
ideological discussion. I mean that’s an obvious point,
and what I said in my preface and what reviewers have
pointed out that I’ve said is that this is all obvious
once you’ve read it. So L. Ron Hubbard said the only
way for a writer to get rich is to start a religion, so
sometimes I’ve been tempted. Sometimes I’ve been tempted to start a new faith called
obviology [phonetic spelling] and to take advantage
of all the religious liberty that people have been
talking about, so stay tuned. I may return in a
new incarnation.>>Thank you.>>Edward Tenner: Thanks.>>Very insightful.>>Hi. Thank you for
an interesting talk. I’m a software engineer
by profession, and my question is a
reaction to what you said, that you don’t wish your talk to
be a critique of Silicon Valley. I was reminded of what one
of the founders of the AI lab at MIT, Joseph Weizenbaum, wrote 30 years ago in a book called Computer Power and Human Reason. He wrote that his disillusionment with computers arose out of the fact that he had written a toy AI program called ELIZA, which would do some natural language interaction with people and a very rudimentary analysis of medical problems, and that simple toy program seduced thousands of its users into believing that it was a real doctor, and he had to attend to their
questions every day about all — they had questions,
medical questions — thinking that the toy program
would give the solution. So my question to you is,
in a very general sense — I mean he believed that computers have not really
fundamentally changed society for the better, and you
see a reaction these days to all this gadgetry that is
being pushed by Silicon Valley, smartphones, 24/7 total, you know, involvement with
electronic devices. So my general question to
you is, in the overall scheme of things, is Silicon Valley
really doing a good job?>>Edward Tenner: Has
there been progress? Is that what you’re asking? Well, I think, no, I would
like to put it another way. I would like to say,
rather than asking, well, what is the balance so
far — what I would like, the way I would like to put it, is what is the prospect
for the future? What will be the best
way to use technology? Because I don’t think that
anything is inevitable about how technology applies. I believe that you
and I and all of us in this room really are the
people, along with millions or billions of other people, who determine what
technology does, what it means. Our decisions can make
it absolutely terrible. Our decisions also can use
it to do a lot of good, to do real good in education
instead of being a fad in education, to make real
improvements in health, although I document
how the misapplication of it has resulted in actually
more paperwork for doctors. So I’m trying not to say that
there’s anything inherent in the technology that makes
it one way or the other, and the great quotation
was from Mel Kranzberg, one of the founders of
the history of technology, and he said that
technology is neither good nor bad, nor is it neutral.
we make of it.>>Thank you. [ Inaudible Question ]>>Edward Tenner: Yeah.>>Small question, please. It’s more like a request. As an ordinary person, you
know, as far as I know, I am nobody and, you know, so
I’m sure there are lots and lots of us around the world as
that category, you know, which we belong to that
category rather than, let’s see, I’m your daughter, because
you’re so tech-savvy, and you know what is going on
and what’s so analog and what’s so digital — that
sort of stuff. So from my understanding, I
know there’s not much to speak about our time in terms of,
you know, explaining my psyche or what’s going on behind
my head or my motivation, but I feel that I’m compared
to — I’m an immigrant, also Tibetan, from India, and we
are political refugees in India, and Silicon Valley of
India is Bangalore. So I belong to that state. You know, I’m from a village
five hours from the city, but that is where some of
the software producers are, you know, down there. So therefore, compared
to other immigrants, I feel I’m more fortunate
kind of background, possibly. You know, even if I don’t know,
I might have a lot of help all around the world, sort of, but
then there are so many others, and it’s so heartbreaking,
and I know I have thousands of questions in my head. So how should we proceed if it
is too late or it’s too foolish to be struggling so hard? Why don’t you get it? And this is how the
world is organized, and this is how it has been
going on since the 1960s or the Cold War or this,
blah, blah, blah, you know, and probably more than a
hundred years — who knows — or maybe within the last ten
years or, you know, as you say, Amazon and Jeff Bezos
and all that, you know, taking over Washington Post,
and so how should we think and function to be safe
and also, you know, if it is still not too late, and if I believe the
way I believe that, it looks like we are
still kind of part of the normal so-called — you know, I’m a very
religious person — so that there’s still
humanity left, that, you know, if we walk the right walk
and talk the right talk, we should be pretty safe, or
else this is what is the reality and you should not
fight but rather respect and also seek help and not
challenging and all that. If you do that, you see that
that’s how it can be done, and it’s one second,
or it’s two seconds, and the whole of D.C. could be wiped out or the whole of America
could be wiped out or the whole world
could be gone, so therefore you’d
better behave. Do not disrespect. Do not be racially biased,
this, that, you know, just know their power
or capability. How are we supposed
to, you know — please, if you haven’t thought
about it, which I find it so funny, because as educated
and as so forward thinkers as you guys, it looks like,
you know, not much focus on the ordinary stuff. Rather you are too immersed in
the very, very specific thing, you know, which is so great,
but okay, don’t you feel like, you know, you are neglecting us? And so therefore, how
should we perceive, sir, that there’s still humanity
left, as in, you know, I’m a Tibetan, or you’re
Chinese, or you’re Nepalese, or you’re Bangladeshi,
or there’s no such thing? Just get on with it, you know?>>Edward Tenner: Yeah.>>Thank you so much.>>Edward Tenner: Thank you. Thank you. My book really does not try to answer ultimate
questions, so it’s really –>>Yes, please do, and let
others do, please, you know, so that we can get more
education to people.>>Edward Tenner: Yeah.>>I’m kind of a little
bit fortunate one.>>Edward Tenner: Thank you. Thanks for your remarks.>>Thank you.>>Edward Tenner: I can’t really
answer the deepest questions. I stick to the obvious. So thank you.>>Great. Thanks. So I really appreciated
what you said before about technology being neither
good nor evil nor neutral, and I’m just wondering
what your thoughts are on engaging policy makers who
may or may not know the details of any particular new emerging
technology and how these kind of people who are making
important decisions can balance precaution with taking
advantage of improved efficiency and opportunities that
new technologies present.>>Edward Tenner:
That’s a great question. When I was an editor at
Princeton University Press, I would always cringe
when somebody had a book that was intended
for policy makers because the policy makers, I’m
not sure if they read books, but they certainly
don’t buy them.>>Certainly not that
kind of book, right?>>Edward Tenner: It was
not a very favorable thing when I heard about it, and
I generally don’t really — I generally don’t write
about government policy, and one reason for
that is that so often, policy has had unintended
consequences. For example, in Ellen Brant’s
book Cigarette Century, which is a wonderful account
of the unintended consequences of an automated cigarette
rolling machine that was patented about 1880
for the rise of world deaths in smoking, this was probably
the most lethal invention of all time, greater,
probably, than nuclear weapons. And so the problem
that I’ve seen is that very often these
policy changes — for example, breaking up James
B. Duke’s American Tobacco. Duke was the one who really
worked with the inventor, Bonsack, to turn
cigarette-making from a hand-rolling operation
like cigars to something where machines could
produce, you know, millions of cigarettes a day,
and the federal government, they didn’t object to smoking. They just thought that
Duke was really hogging it and was abusing the
monopoly of the machine, and so we had American
Tobacco broken up just as Standard Oil was broken up,
and the result was that instead of a tobacco monopoly,
you had an oligarchy that was even harder
to dislodge. It took many years. It took decades until the
surgeon general’s report. So my problem with policy is that always there are these
unintended consequences, and so my emphasis is on helping
readers to understand policy for themselves, and possibly,
if there’s enough influence, if there are enough
people, then I might be able to do a little bit to
help move the culture.>>Thank you very much.>>Edward Tenner: Thank you.>>I wanted you to put
together your research on Waze a little bit and
some of your conclusions. I have two observations
that I want to share. One is that even when I don’t
use Waze, it’s hurt my life. I’m a cyclist who
goes to beach towns, and Waze is now routing
car traffic through the less traveled
roads that I used to enjoy. And two is that it’ll often, in a rural area, put you on one very small route,
but they put a lot of people on the same route, and then
the whole thing kind of breaks down if there’s a
traffic problem. So just broadly, if you
can say a couple of things about what you’re finding, and
more broadly, the implications of other — you alluded
to certain key findings, and I thought you were sort
of going to maybe mention where else you’ve seen what
you see happening with Waze.>>Edward Tenner: Thanks. I wish I had more time, but the
signal is that the time is up, so I leave it to individuals
to experiment with Waze. Really, it’s free. It’s worth trying. Who knows? You may be like me and work
your way up to Royalty. And I’ll be glad
to talk one-on-one, but I think we have to –>>There’s one question.>>Edward Tenner: I’m sorry. Yeah.>>Very short question. Yeah. What’s your view and also
your expectation for AI? You know, AI is developing very rapidly right now. Some people say AI may control the whole world in the future. What’s your view and your expectation for AI, artificial intelligence?
as far as world control through artificial intelligence,
I mean I have some views on that, but with the limits
on time, I think I would have to talk about that privately,
so I hope you’ll understand, you know, that we
have a very, very, very persuasive analog
time keeper, and I need to follow
her direction. So thank you very much. [ Applause ]