00:00:00
the following is a conversation with Doug Lenat, creator of Cyc, a system that for close to 40 years and still today has sought to solve the core problem of artificial
00:00:11
intelligence the acquisition of common sense knowledge and the use of that knowledge to think to reason and to understand the world to support this podcast please check out our sponsors in the description
00:00:25
as a side note let me say that in the excitement of the modern era of machine learning it is easy to forget just how little we understand exactly how to build the kind of intelligence that
00:00:37
matches the power of the human mind to me many of the core ideas behind Cyc in some form in actuality or in spirit will likely be part of the ai system that achieves general super
00:00:49
intelligence but perhaps more importantly solving this problem of common sense knowledge will help us humans understand our own minds the nature of truth and finally how to be
00:01:01
more rational and more kind to each other. this is the Lex Fridman podcast, and here is my conversation with Doug Lenat. Cyc is a project launched by you in
00:01:14
1984 and still is active today whose goal is to assemble a knowledge base that spans the basic concepts and rules about how the world works in other words it hopes to capture
00:01:26
common sense knowledge which is a lot harder than it sounds can you elaborate on this mission and maybe perhaps speak to the various sub-goals within this mission when i was a
00:01:38
faculty member in the computer science department at stanford my colleagues and i did research in all sorts of artificial intelligence programs so natural language
00:01:51
understanding programs robots expert systems and so on and we kept hitting the very same brick wall our systems would have
00:02:02
impressive early successes and so if your only goal was academic namely to get enough material to write a journal article
00:02:15
that might actually suffice but if you're really trying to get ai then you have to somehow get past the brick wall and the brick wall was the programs didn't have what we would call common sense they
00:02:27
didn't have general world knowledge they didn't really understand what they were doing what they were saying what they were being asked and so very much like a clever dog performing tricks we could
00:02:40
get them to do tricks but they never really understood what they were doing sort of like when you get a dog to fetch your morning newspaper the dog might do that successfully but the dog has no idea what a newspaper is
00:02:53
or what it says or anything like that. what does it mean to understand something? can you maybe elaborate on that a little bit? is understanding an action of, like, combining little things together, like through inference, or is
00:03:06
understanding the wisdom you gain over time that forms the knowledge? i think of understanding more like... think of it more like the ground you stand on, which
00:03:19
could be very shaky could be very unsafe but most of the time is not because underneath it is more ground and eventually you know rock and and other things but
00:03:32
layer after layer after layer that solid foundation is there and you rarely need to think about it you rarely need to count on it but occasionally you do and
00:03:44
i've never used this analogy before so bear with me but i think the same thing is true in in terms of getting computers to understand things which is uh you ask a computer a question for instance alexa
00:03:57
or some robot or something and um maybe it gets the right answer but if you were asking that of a human you could also say things like why
00:04:09
or how might you be wrong about this or something like that and the person you know would would answer you and you know it might be a little annoying if you have a small child and they keep asking why questions in series eventually you
00:04:23
get to the point where you throw up your hands and say i don't know it's just the way the world is but for many layers you actually have that that layered solid
00:04:34
foundation of support so that when you need it you can count on it and when do you need it well when things are unexpected when you come up against a situation which is novel for instance when you're driving
00:04:47
it may be fine to have a small program, a small set of rules, that covers, you know, 99% of the cases, but that one percent of the time when something strange happens
00:05:00
you really need to draw on common sense for instance um my wife and i were driving recently and there was a trash truck in front of us and i guess they had packed it too full
00:05:11
and the back exploded and trash bags went everywhere and we had to you know make a split second decision are we going to slam on our brakes are we going to swerve into another lane are we going to just run it
00:05:25
over because there are cars all around us and you know in front of us was a large trash bag and we know what we throw away in trash bags probably not a safe thing
00:05:36
to run over um over on the the left was um a bunch of fast food restaurant um trash bags and it's like oh well those things are just like styrofoam and leftover food we'll run over that and so
00:05:48
that was a safe thing for us to to do now that's the kind of thing that's going to happen maybe once in your life and um but the point is that there's almost no telling
00:06:00
what little bits of knowledge about the world you might actually need in some situations which were unforeseen but see when you sit on that mountain or that ground that goes deep of
00:06:14
knowledge in order to make a split-second decision about fast food trash or random trash from the back of a trash truck you need to be able to leverage that
00:06:28
ground you stand on in some way it's not merely you know it's not enough to just have a lot of ground to stand on it's your ability to leverage it to utilize in a split like integrate it all together to
00:06:40
make that split second decision and i i suppose understanding isn't just having uh common sense knowledge to access
00:06:52
it's the act of accessing it somehow, like correctly filtering out the parts of the knowledge that are not useful, selecting only the useful parts, and
00:07:05
effectively making conclusive decisions so let's tease apart two different tasks really both of which are incredibly important and even necessary if you're going to have this in a usable
00:07:18
useful usable fashion as opposed to say like library books sitting on a shelf and so on where the knowledge might be there but you know if a fire comes the books are going to burn because they don't know
00:07:31
what's in them and they're just going to sit there while they burn um so there are two there are two aspects of using the knowledge one is a kind of a theoretical how is it possible at all
00:07:44
and then the second aspect of what you said is how can you do it quickly enough right um so how can you do it at all is something that philosophers have grappled with and fortunately
00:07:56
philosophers 100 years ago and even earlier developed a kind of formal language like english it's called
00:08:08
predicate logic or first order logic or something like predicate calculus and so on so there's a way of representing things in this formal language which
00:08:21
enables a mechanical procedure to sort of grind through and algorithmically produce all of the same logical entailments all the same logical conclusions that you or i would from
00:08:35
that same set of pieces of information that are represented that way.
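To make "mechanically grinding through entailments" concrete, here is a minimal forward-chaining sketch in Python; the predicates and rules are illustrative stand-ins, not CycL and not Cyc's actual inference engine:

```python
# Minimal forward chaining: facts and if-then rules in a predicate-logic
# style are ground through until no new conclusions appear (a fixed point).
facts = {("isa", "Fred", "Human")}

# Each rule: if ("isa", ?x, premise_class) holds, conclude (rel, ?x, obj).
rules = [
    (("isa", "Human"), ("isa", "Mammal")),
    (("isa", "Mammal"), ("isa", "Animal")),
    (("isa", "Animal"), ("capableOf", "Dying")),
]

changed = True
while changed:
    changed = False
    for (p_rel, p_obj), (c_rel, c_obj) in rules:
        for rel, subj, obj in list(facts):
            new = (c_rel, subj, c_obj)
            if rel == p_rel and obj == p_obj and new not in facts:
                facts.add(new)
                changed = True

print(sorted(facts))  # includes ('capableOf', 'Fred', 'Dying')
```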
00:08:49
so that raises a couple of questions. one is, how do you get all this information from, say, observations and english and so on into this logical form? and secondly, how can you then efficiently run these algorithms to actually get the information you need, in the case i
00:09:02
mentioned, in a tenth of a second rather than, say, 10 hours or 10,000 years of computation. and those are both really important questions. and like a corollary
00:09:15
addition to the first one is, how many such things do you need to gather for it to be useful in certain contexts? so, like, you mention philosophers, in order to capture this world and
00:09:29
represent it in a logical way and with a formal logic like how many statements are required is it five is it ten is it ten trillion is it
00:09:40
like that. that's, as far as i understand, probably still an open question. it may forever be an open question, just to say, like, definitively, to describe the universe perfectly well,
00:09:53
how many facts do you need i i guess i'm going to disappoint you by giving you an actual answer to your question okay um well no this sounds exciting yes okay so so now we have like um three
00:10:07
three things to talk about we'll keep adding more that's okay the first and the third are related yes um so let's leave the efficiency question aside for now so um how how does all this
00:10:20
information get represented in logical form so that these algorithms resolution theorem proving and other algorithms can actually grind through all the logical consequences of what you
00:10:33
said and that ties into your question about well how many of these things do you need because if the answer is small enough then by hand you could write them out one at a time
00:10:46
so in early 1984, i held a meeting at Stanford, where i was a
00:10:57
faculty member there, where we assembled about half a dozen of the smartest people i know, people like Allen Newell and Marvin Minsky and Alan
00:11:11
Kay and a few others. was Feynman there, by chance? because he commented about your system Eurisko at the time. no, no, he wasn't part of this meeting. but that's a heck of a
00:11:24
meeting anyway. i think Ed Feigenbaum was there, i think Josh Lederberg was there. so we had all these different smart people, and we came together
00:11:37
to address the question that that you raised which is if it's important to represent common sense knowledge and world knowledge in order for ais to not be brittle in order for ais not to just have the
00:11:50
veneer of intelligence well how many pieces of common sense how many if then rules for instance would we have to actually write in order to essentially cover what
00:12:02
what people expect perfect strangers to already know about the world and i expected there would be an enormous divergence of opinion um and computation
00:12:14
but amazingly everyone got an answer which was around a million and one person one person got the answer by saying well look you can only burn into human
00:12:28
long-term memory a certain number of things per unit time like maybe one every 30 seconds or something and other than that it's just short-term memory and it flows away like water and so on so by the time you're say 10 years old
00:12:41
or so um how many things could you possibly have burned into your long-term memory and it's like about a million another person went a completely different direction and said well if you look at the number of words
00:12:55
in a dictionary not a whole dictionary but for someone to essentially be considered to be fluent in a language how many words would they need to know and then about how many
00:13:06
things about each word would you have to tell it, and so they got to a million that way. another person said, well, let's actually look at one single
00:13:18
short, one-volume desk encyclopedia article, and so we'll look at, you know, what was like a four-paragraph article or something, i think about grebes. grebes are a type of waterfowl,
00:13:32
and if we were going to sit there and represent every single thing that was there um how many assertions or rules or statements would we have to write in this logical language and so on and then
00:13:44
multiply that by the number of articles that there were, and so on. so all of these estimates came out at around a million, and so if you do the math, it turns out that, oh well, then maybe in
00:13:58
something like a hundred years, in one or two person-centuries, we could actually get this written down by hand.
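As a rough check on that math (the rates below are illustrative assumptions, not figures from the meeting):

```python
# Back-of-the-envelope: how long would a million hand-written assertions take?
assertions = 1_000_000          # the roughly-a-million consensus estimate
minutes_each = 15               # assumed time to formalize one assertion
hours_per_person_year = 2_000   # one working year

person_years = assertions * minutes_each / 60 / hours_per_person_year
print(person_years)             # 125.0 -- i.e. one or two person-centuries
```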
00:14:11
and by a marvelous coincidence, an opportunity existed right at that point in time, the early 1980s. there was something called the Japanese Fifth Generation computing effort. Japan
00:14:24
had threatened to do in computing and ai and hardware what they had just finished doing in consumer electronics and the automotive industry, namely wresting control away from the united states and more
00:14:36
generally away from the west and so america was scared and congress did something that's how you know it was a long time ago because congress did something congress passed something called the
00:14:48
National Cooperative Research Act, NCRA, and what it said was, hey, all you big american companies, that's also how you know it was a long time ago, because they were american companies rather than multinational companies, hey, all you big
00:15:02
american companies normally it would be an anti-trust violation if you colluded on r d but we promise for the next 10 years we won't prosecute any of you if you do that
00:15:14
to help combat this threat and so overnight the first two consortia research consortia in america sprang up both of them coincidentally in austin
00:15:26
texas. one called SEMATECH, focusing on hardware, chips, and so on, and one called MCC, the Microelectronics and Computer Technology Corporation, focusing more on software, on databases and ai
00:15:40
and natural language understanding and things like that. and i got the opportunity, thanks to my friend Woody Bledsoe, who was one of the people who founded
00:15:52
that, to come and be its principal scientist. and Admiral Bob Inman, who was the person running MCC, came and talked to me and said, look, professor, you're talking about doing this
00:16:05
project it's going to involve person centuries of effort if you've only got a handful of graduate students you do the math it's going to take you like you know longer than the rest of your life to
00:16:17
finish this project but if you move to the wilds of austin texas we'll put 10 times as many people on it and you know you'll be done in a few years and so that was pretty exciting and so um i did that i
00:16:31
took my leave from Stanford, i came to Austin, i worked for MCC. and, good news and bad news: the bad news is that all of us were off by an order of magnitude. it turns out what you need are tens
00:16:44
of millions of these pieces of knowledge about everyday things, sort of like: if you have a coffee cup with stuff in it and you turn it upside down, the stuff in it is going to fall out.
00:16:56
so you need tens of millions of pieces of knowledge like that even if you take trouble to make each one as general as it possibly could be but
00:17:06
the good news was that thanks to initially the fifth generation effort and then later u.s government agency funding and so on we were able to get enough funding not for a couple
00:17:21
person-centuries of time, but for a couple person-millennia of time, which is what we've spent since 1984 getting Cyc to contain the tens of millions of rules that it needs in order
00:17:34
to really capture and span, sort of, not all of human knowledge, but the things that you assume other people know, the things you count on other people knowing. and
00:17:47
so by now we've done that and the good news is since you've waited 38 years just about to talk to me we're about at the at the end of that process so most of
00:17:59
what we're doing now is not putting in even what you would consider common sense but more putting in domain-specific application-specific knowledge about healthcare
00:18:12
in a certain hospital or about um oil um pipes getting um clogged up or whatever the applications happen to be so we've almost come full circle and we're doing things very much like the
00:18:25
expert systems of the 1970s and the 1980s except instead of resting on nothing and being brittle they're now resting on this massive pyramid if you will this massive lattice of common
00:18:38
sense knowledge so that when things go wrong when something unexpected happens they can fall back on more and more and more general principles eventually bottoming out in things like
00:18:50
for instance if we have a problem with the microphone one of the things you'll do is unplug it plug it in again and hope for the best right because that's one of the general pieces of knowledge you have in dealing with electronic equipment or
00:19:02
software systems or things like that is there a basic principle like that like is it possible to encode something that generally captures this idea of turn it off and turn it back on and see
00:19:14
if it fixes it? oh, absolutely, that's one of the things that... that's actually one of the fundamental laws of nature, i believe. i wouldn't call it a law, it's
00:19:27
more like... it seems to work every time, so it sure looks like a law, i don't know.
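A minimal sketch of that fallback idea, climbing from the specific thing to ever more general categories until some advice applies. The category names and the tip are illustrative, not Cyc's actual representation:

```python
# Fall back on more and more general knowledge: look for advice about the
# specific thing first, then walk up the generalization hierarchy.
generalization = {"Microphone": "ElectronicDevice",
                  "ElectronicDevice": "Artifact",
                  "Artifact": None}
advice = {"ElectronicDevice": "power-cycle it and hope for the best"}

def fix_tip(thing):
    while thing is not None:
        if thing in advice:
            return advice[thing]       # most specific advice wins
        thing = generalization[thing]  # otherwise generalize and retry
    return "no idea"

print(fix_tip("Microphone"))  # inherited from ElectronicDevice
```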
00:19:39
so that basically covered the resources needed. and then we had to devise a method to actually figure out, well, what are the tens of millions of things that we need to tell the system. and for that we found a few techniques which worked really well. one
00:19:52
is to take um any piece of text almost it could be an advertisement it could be a transcript it could be a novel it could be an article and don't pay attention to the actual
00:20:04
type that's there the the black space on the white page pay attention to the complement of that the white space if you will so what did the writer of this sentence assume that the reader already knew
00:20:16
about the world for instance if they used a pronoun how did they figure out that why did they think that you would be able to understand what the intended referent of that pronoun was if they used an
00:20:28
ambiguous word how did they think that you would be able to figure out what they meant by that word the other thing we look at is the gap between one sentence and the next one what are all the things that the writer
00:20:42
expected you to fill in and infer occurred between the end of one sentence and the beginning of the other. so, like, if the sentence says: Fred Smith robbed the Third National Bank, period.
00:20:55
he was sentenced to 20 years in prison, period. well, between the first sentence and the second, you're expected to infer things like: Fred got caught, Fred got arrested, Fred went to jail, Fred had
00:21:08
a trial, Fred was found guilty, and so on. if my next sentence starts out with something like, the judge dot dot dot, then you assume it's the judge at his trial. if my next sentence starts out something like, the arresting officer dot
00:21:21
dot dot, you assume that it was the police officer who arrested him after he committed the crime, and so on. so those are two techniques for getting that knowledge.
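A toy version of that gap-filling, with a hypothetical crime "script" standing in for the common-sense knowledge the reader is assumed to bring:

```python
# "Reading the white space": given two stated events, list the unstated
# common-sense steps a reader is expected to infer in between.
crime_script = ["rob", "get_caught", "get_arrested", "stand_trial",
                "be_found_guilty", "be_sentenced", "go_to_prison"]

def implied_steps(first_stated, second_stated, script):
    i, j = script.index(first_stated), script.index(second_stated)
    return script[i + 1 : j]

# "Fred Smith robbed the bank." ... "He was sentenced to 20 years."
print(implied_steps("rob", "be_sentenced", crime_script))
# ['get_caught', 'get_arrested', 'stand_trial', 'be_found_guilty']
```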
00:21:33
the other thing we sometimes look at is sort of, like, fake news, or humorous Onion headlines, or headlines in the Weekly World News, if you know what that is, or the National Enquirer, where
00:21:47
it's like, oh, we don't believe this, then we introspect on why we don't believe it. so there are things like, B-17 lands on the moon, and it's like, what do we know about the world that causes us to believe that
00:22:00
that's just silly, or something like that. another thing we look for are contradictions, things which can't both be true, and we ask, what is it that we know that
00:22:12
causes us to know that both of these can't be true at the same time. for instance, in one of the Weekly World News editions, one article talked about how Elvis was sighted, you know, even
00:22:26
though he was, you know, getting on in years, and so on, and another article in the same one talked about people seeing Elvis's ghost. okay, so it's like, why do we believe that at least one of these articles, you
00:22:37
know must be wrong and so on so um so we have a series of techniques like that that enable our people um and by now uh we have about 50 people working full-time on this and have for for decades so we've put in the
00:22:51
thousands of person years of effort we've built up these tens of millions of rules we constantly police the system to make sure that we're saying things as generally as we
00:23:02
possibly can so you don't want to say things like um no mouse is also a moose because if you said things like that then you'd have to add another one or
00:23:15
two or three zeros onto the number of assertions you'd actually have to have so at some point we generalize things more and more and we get to a point where we say oh yeah for any two biological
00:23:27
taxa, if we don't know explicitly that one is a generalization of another, then almost certainly they're disjoint: a member of one is not going to be a member of the other, and so on.
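A sketch of how one general default replaces a flood of specific assertions; the tiny taxonomy is illustrative, and transitive closure is omitted for brevity:

```python
# Default rule: two taxa with no known subsumption link are assumed disjoint,
# so "no mouse is also a moose" never has to be written down explicitly.
subsumes = {("Animal", "Mammal"), ("Mammal", "Mouse"), ("Mammal", "Moose")}

def related(a, b):
    return a == b or (a, b) in subsumes or (b, a) in subsumes

def assumed_disjoint(a, b):
    return not related(a, b)

print(assumed_disjoint("Mouse", "Moose"))   # True  -- disjoint by default
print(assumed_disjoint("Mammal", "Mouse"))  # False -- subsumption is known
```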
00:23:40
and the same thing with the Elvis and the ghost: it has nothing to do with Elvis, it's more about human nature and mortality. and, well, right, in general things are not both alive and dead at the same time. yeah, unless you're one of those special
00:23:52
cats in theoretical physics examples. well, that raises a couple of important points. well, that's the Onion-headline type of situation. okay, sorry. no, no, what you bring up is this really important point of, like, well, how
00:24:05
do you handle exceptions and inconsistencies and so on um and one of the hardest lessons for us to learn it took us about five years to to really grit our teeth and
00:24:18
learn to love it, is we had to give up global consistency. so the knowledge base can no longer be consistent. so this is a kind of scary thought: i grew up watching Star Trek, and
00:24:30
anytime a computer was inconsistent it would either freeze up or explode or take over the world or something bad would happen or if you come from a mathematics background once you can prove false you
00:24:43
can prove anything, so that's not good, and so on. so that's why the old knowledge-based systems were all very, very consistent. but the trouble is that
00:24:55
by and large our models of the world the way we talk about the world and so on there are all sorts of inconsistencies that creep in here and there that will sort of kill some attempt to build some enormous globally consistent
00:25:08
knowledge base and so what we had to move to was a system of local consistency so a good analogy is you know that the surface of the earth is more or less spherical
00:25:20
globally but you live your life every day as though the surface of the earth were flat you know when you're talking to someone in australia you don't think of them as being oriented upside down to
00:25:32
you when you're planning a trip you know even if it's a thousand miles away you may think a little bit about time zones but you rarely think about the curvature of the earth and so on and for most purposes you can live your whole
00:25:43
life without really worrying about that, because the earth is locally flat. in much the same way, the Cyc knowledge base is divided up into, almost like, tectonic plates, which are individual
00:25:56
contexts and each context is more or less consistent but there can be small inconsistencies at the boundary between one context and the next one and so on and so by the time you move say 20
00:26:10
contexts over, there could be glaring inconsistencies. so eventually you get from the normal, modern, real-world context that we're in right now to something, you know, like a Road Runner
00:26:23
cartoon context, where physics is very different, and in fact life and death are very different, because no matter how many times he's killed, you know, the Coyote comes back in the next scene, and so on.
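A minimal sketch of context-relative truth with overrides, using illustrative context names:

```python
# An assertion can hold in one context and be overridden in a more specific
# one; the most specific context wins, otherwise the answer is inherited.
parent = {"RoadRunnerCartoon": "FictionalWorlds",
          "FictionalWorlds": "RealWorld",
          "RealWorld": None}
holds = {"RealWorld": {"death_is_permanent": True},
         "RoadRunnerCartoon": {"death_is_permanent": False}}

def query(context, assertion):
    while context is not None:
        if assertion in holds.get(context, {}):
            return holds[context][assertion]
        context = parent[context]
    return None  # unknown in every context along the chain

print(query("RoadRunnerCartoon", "death_is_permanent"))  # False (overridden)
print(query("FictionalWorlds", "death_is_permanent"))    # True (inherited)
```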
00:26:35
so um that that was a hard lesson to learn and we had to make sure that our representation language the way we the way that we actually encode the knowledge and represent it was expressive enough that we could talk
00:26:46
about things being true in one context and false in another things that are true at one time and false in another things that are true let's say in one region like one country but false in another things that are true in one
00:26:59
person's belief system but false in another person's belief system things that are true at one level of abstraction and false at another for instance at one level of abstraction you think of this table as a solid
00:27:12
object, but, you know, down at the atomic level it's mostly empty space, and so on. so that's fascinating, but it puts a lot of pressure on contexts to do a lot of work. so you say tectonic
00:27:24
plates. is it possible to formulate contexts that are general and big, that do this kind of capture of knowledge bases, or do you then get turtles on top of turtles again, where there's just a huge
00:27:38
number of contexts? so it's good you ask that question, because you're pointed in the right direction, which is, you want contexts to be first-class objects in your system's knowledge base, in
00:27:50
particular in Cyc's knowledge base. and by first-class object i mean that we should be able to have Cyc think about and talk about and reason about one context or another context the same
00:28:03
way it reasons about coffee cups and tables and people and fishing and so on and so contexts are just terms in its language just like the ones i mentioned
00:28:14
and so Cyc can reason about contexts. contexts can be arranged hierarchically and so on, and so you can say things about, let's say,
00:28:26
things that are true in the modern era. things that are true in a particular year would then be a sub-context of the things that are true in, broadly, let's say, a century or a millennium or
00:28:40
something like that things that are true in austin texas are generally going to be a specialization of things that are true in texas which is going to be a specialization of things that are true in the united states and
00:28:53
so on and so you don't have to say things over and over over again at all these levels you just say things at the most general level that it applies to and you only have to say it once and then it
00:29:05
essentially inherits to all these more specific contexts. let me ask a slightly technical question: is this inheritance a tree or a graph? oh, you definitely have to think of it as a graph.
00:29:18
so we could talk about for instance why the japanese fifth generation computing effort failed there were about half a dozen different reasons one of the reasons they failed was because they tried to represent knowledge as a tree
00:29:32
rather than as a graph and so each node in their representation could only have one parent node so if you had a table that was a
00:29:44
wooden object a black object a flat object and so you have to choose one and that's the only parent it could have when of course you know depending on what it is you need to reason about it
00:29:56
sometimes it's important to know that it's made out of wood like if we're talking about a fire sometimes it's important to know that it's flat if we're talking about resting something on it and so on so um one of the one of the problems was
00:30:10
that they wanted a kind of dewey decimal numbering system for all of their concepts which meant that each node could only have at most 10 children and each node could only have one parent
00:30:23
and while that does enable the Dewey-decimal-type numbering of concepts, labeling of concepts, it prevents you from representing all the things you need to about objects in our world.
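To see why the graph matters, here is a sketch in which one concept inherits along several parent links at once; the concept names and properties are illustrative:

```python
# A concept with multiple parents: which parent matters depends on what you
# are reasoning about, so a single-parent tree would lose information.
parents = {"MyTable": ["WoodenObject", "FlatObject", "BlackObject"],
           "WoodenObject": [], "FlatObject": [], "BlackObject": []}
implies = {"WoodenObject": "flammable",
           "FlatObject": "can_support_things",
           "BlackObject": "hard_to_see_at_night"}

def properties(node):
    props = {implies[node]} if node in implies else set()
    for p in parents[node]:
        props |= properties(p)  # gather along every parent path
    return props

print(sorted(properties("MyTable")))
# ['can_support_things', 'flammable', 'hard_to_see_at_night']
```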
00:30:36
and that was one of the things which they never were able to overcome, and i think that was one of the main reasons that project failed. so we'll return to some of the doors you've opened, but if we can go back to that
00:30:48
room in 1984, or around there, with Marvin Minsky. yeah. but, by the way, i should mention that Marvin wouldn't do his estimate until someone brought him an envelope, so that he could literally do a back-of-the-
00:31:02
envelope calculation to come up with his number. well, because i feel like the conversation in that room is an important one. you know, this is how
00:31:15
sometimes science is done in this way: a few people get together and plant the seeds of ideas, and they reverberate throughout history, and some kind of dissipate and disappear, and some,
00:31:27
you know, like the Drake equation, seem like a somewhat meaningless equation, but i think it drives and motivates a lot of scientists. and when the aliens finally show up, that equation will get even more
00:31:40
valuable, because then, in the long arc of history, the Drake equation will prove to be quite useful. i think in that same way, a conversation
00:31:52
of just how many facts are required to capture the basic common sense knowledge of the world that's a fascinating question i want to distinguish between what you think of as facts and the kind of things that we represent so
00:32:05
we map to and essentially make sure that Cyc has the ability to, as it were, read and access the kind of facts you might find, say, in Wikidata, or stated in a Wikipedia
00:32:18
article or something like that so what we're representing the things that we need a small number of tens of millions of are more like rules of thumb rules of good guessing things which are usually
00:32:29
true and which help you to make sense of the facts that are sort of sitting off in some database or some other more static store. so they're almost like platonic forms. so, like, when you read stuff on
00:32:43
Wikipedia, that's going to be, like, projections of those ideas. you read an article about the fact that Elvis died, that's a projection of the idea that humans are mortal, and, like, you know, very
00:32:56
few Wikipedia articles will write "humans are mortal". exactly, and that's what i meant about ferreting out the unstated things in texts, what are all the things that were assumed. and so those are things like,
00:33:09
if you have a problem with something turning it off and on um often fixes it for reasons we don't really understand and we're not happy about or people can't be both alive and dead at the same time and or water flows
00:33:22
downhill if you search online for water flowing uphill and water flowing downhill you'll find more references for water flowing uphill because it's used as a kind of a metaphorical reference for some unlikely
00:33:34
thing because of course everyone already knows that water flows downhill so why would anyone bother saying that do you have a word you prefer because we said facts isn't the right word is there a word like concepts i
00:33:48
would say assertions assertions or rules because i'm not talking about rigid rules but rules of thumb but assertions is a nice one that covers all of these things yeah as a programmer
00:34:01
to me, asserts have a very dogmatic, authoritarian feel to them. i'm sorry [Laughter] i'm so sorry. okay, but assertions works. okay, so if we go back to that room with
00:34:13
Marvin Minsky, with you, all these seminal figures, Ed Feigenbaum, thinking about this very philosophical but also engineering
00:34:24
question we can also go back a couple of decades before then and thinking about artificial intelligence broadly when people were thinking about you know how do you create super intelligent systems
00:34:37
general intelligence and i think people's intuition was off at the time and i mean this continues to be the case that we're not when we're grappling with these
00:34:50
exceptionally difficult ideas we're not always it's very difficult to truly understand ourselves when we're thinking about the human mind to to introspect how difficult it is to engineer intelligence to solve
00:35:04
intelligence we're not very good at estimating that and you are somebody who has really stayed with this question for decades do you what's your sense
00:35:15
from the 1984 to today have you gotten a stronger sense of just how much knowledge is required so you've kind of said with some level of certainty that's still on the order of magnitude of tens
00:35:29
of millions right but for the first several years i would have said that it was on the order of one or two million yeah and so um it took it took us about five or six years to realize that we were off
00:35:42
by a factor of 10. but i guess what i'm asking, you know, Marvin Minsky was very confident in the 60s. yes, right. what's your sense,
00:35:53
if you you know 200 years from now you're still you know you're you're not going to be any longer in this particular biological body but your brain will still be uh in the digital form and you'll be
00:36:09
looking back would you think you were smart today like your intuition was right or do you think you may be really off so i i think i'm i'm right enough and
00:36:22
let me explain what i mean by that which is um sometimes like if you have an old-fashioned pump you have to prime the pump yeah and then eventually it starts so i think i'm i'm right enough in the
00:36:35
sense that what we've built, yeah, even if it isn't, so to speak, everything you need, it's primed the knowledge pump enough that Cyc can now itself
00:36:48
help to learn more and more automatically on its own by reading things and understanding and occasionally asking questions like a student would or something and by doing experiments and discovering things on
00:37:01
its own, and so on. so through a combination of Cyc-powered discovery and Cyc-powered reading, it will be able to bootstrap itself. maybe it's the final two percent, maybe
00:37:14
it's the final 99%. so even if i'm wrong, all i really need to build is a system which has primed the pump enough that it can begin
00:37:26
that cascade upward that self-reinforcing sort of quadratically or maybe even exponentially increasing path upward um that we get from for
00:37:38
instance talking with each other that's why um humans today know so much more than humans a hundred thousand years ago we're not really that much smarter than people were a hundred thousand years ago but there's so much more knowledge and
00:37:50
we have language and we can communicate we can check things on google and so on so effectively we have this enormous power at our fingertips and there's almost no limit to how much you could learn if you wanted to because
00:38:04
you've already gotten to a certain level of understanding of the world that enables you to read all these articles and understand them that enables you to go out and if necessary do experiments although that's slower as a way of
00:38:16
gathering data and so on and and i think this is really an important point which is if we have artificial intelligence real general artificial intelligence human level artificial intelligence
00:38:29
then people will become smarter um it's not so much that it'll be us versus the ais it's more like us and the ais together we'll be able to do things
00:38:41
that require more creativity, that would take too long right now, but we'll be able to do lots of things in parallel, we'll be able to misunderstand each other less. there's all sorts of
00:38:54
value that effectively for an individual would mean that individual will for all intents and purposes be smarter and that means that humanity as a species will be smarter and when was the
00:39:06
last time that any invention qualitatively made a huge difference in human intelligence um you have to go back a long ways it wasn't like the internet or the computer or mathematics or something
00:39:19
it was all the way back to the development of language we sort of look back on pre-linguistic cavemen as well you know they they weren't really intelligent were they they weren't
00:39:33
really human were they and i think that um as you said 50 100 200 years from now people will look back on people today um right before the advent of these sort of
00:39:46
lifelong general ai muses and say you know those poor those poor people they weren't really human were they exactly so you said a lot of really interesting things by the way i would
00:40:00
maybe try to argue that the internet is on the order of the kind of big leap in improvement that
00:40:12
the invention of language was it's certainly a big leap in one direction we're not sure whether it's upward or downward well i i mean very specific parts of the internet which is access to information like a website like wikipedia
00:40:24
like ability for human beings from across the world to access information so very quickly so i i could take either side of this argument and since you just took one side i'll give you the other side which is that
00:40:36
almost nothing has done more harm than something like the internet and access to that information in two ways one is it's made people more
00:40:48
globally ignorant in the same way that calculators made us more or less innumerate so when i was growing up we had to use slide rules we had to be able to estimate yeah and so
00:41:02
on today people don't really understand numbers they don't really understand math they don't really estimate very well at all and so on they don't really understand the difference between trillions and
00:41:14
billions and millions and so on very well because calculators do that all for us um and um thanks to uh things like the internet and search engines um
00:41:27
that same kind of juvenilism is reinforced in making people essentially be able to live their whole lives not just without being able to do arithmetic and estimate but now without actually having to really know almost
00:41:40
anything because anytime they need to know something they'll just go and look it up right and i can tell you could play both sides of this and it is a double-edged sword you can of course say the same thing about language probably people when they invented language they
00:41:52
would criticize you know it used to be we would just if we're angry we would just kill a person and if we're in love we'll just have sex with them and now everybody's writing poetry and bullshit that you know you should just be direct
00:42:05
you should, like, have physical contact. enough with these words and books, and you're not actually experiencing... like, if you read a book, you're not experiencing the thing, this is nonsense. that's right, if you read a
00:42:17
book about how to make butter that's not the same as if you had to like learn it and do it yourself exactly and so on so so let's just say that something is gained but something is lost every time you have um these these
00:42:28
sorts of dependencies um on technology um and overall i think that the um having smarter individuals and having smarter ai augmented
00:42:41
human species will be one of the few ways that we'll actually be able to overcome some of the global problems we have involving poverty and starvation and global warming and
00:42:54
overcrowding, all the other problems that are besetting the planet. we really need to be smarter, and there are really only two routes to being smarter: one is through
00:43:06
biochemistry and genetics genetic engineering the other route is through having general ais that augment our intelligence
00:43:19
and you know hopefully one of those two ways of paths to salvation will will come through before it's too late yeah absolutely i agree with you and
00:43:30
obviously as an engineer i have um i have a better sense and an optimism about the technology side of things because you can control things there more biology is just such a giant mess
00:43:42
we're living through a pandemic now there's so many ways that nature can just be just destructive and destructive in a way where it doesn't even notice you you're not it's not like a battle of humans versus virus it's just like
00:43:55
huh okay and then you can just wipe out an entire species the other problem with the internet is that it has enabled us to surround ourselves with an echo
00:44:08
chamber with a bubble of like-minded people which means that you can have truly bizarre theories conspiracy theories fake news and so on promulgate
00:44:21
and surround yourself with people who essentially reinforce um what you want to believe or what you already believe about the world and in the in the old days that was much
00:44:33
harder to do when you had say only three tv networks or even before when you had no tv networks and you had to actually like look at the world and make your own reasoned decisions i like the push and pull of our dance that we're doing
00:44:46
because then i'll just say, in the old world, having come from the Soviet Union: because you had one or a couple of networks, propaganda could be much more effective, and the government could overpower its people by telling you "the truth"
00:44:59
and then starving millions and torturing millions and putting millions into camps and starting wars, with a propaganda machine allowing you to believe that you're actually doing good in the world.
00:45:11
with the internet because of all the quote unquote conspiracy theories some of them are actually challenging the power centers the very kind of power centers that a century ago would have led to
00:45:23
the death of millions so there's a it's again this double-edged sword and i i very much agree with you on the ai side it's it's often an intuition that people have that somehow ai will be used to
00:45:35
maybe overpower people, by certain select groups, and to me it's not at all obvious that that's the likely scenario. to me the likely scenario, especially just having observed the trajectory of technology, is it'll be
00:45:49
used to empower people it'll be used to extend the capabilities of individuals across the world because there's a lot of money to be made that way like improving people's lives you can
00:46:02
make a lot of money. agreed. i think that the main thing that ai prostheses, ai amplifiers, will do for people is make it easier, maybe even
00:46:15
unavoidable for them to do good critical thinking um so pointing out logical fallacies logical contradictions and so on in things that they otherwise would just
00:46:28
blithely believe pointing out essentially data which they should take into consideration if they really want to
00:46:41
learn the truth about something and so on so i think doing not just educating in the sense of pouring facts into people's heads but educating in the sense of arming people
00:46:52
with the ability to do good critical thinking, is enormously powerful. the education systems that we have in the U.S. and worldwide generally don't do a good job of that,
00:47:05
but i believe that the ai the ais can and will in the same way that everyone can have their own um um alexa or siri or
00:47:18
google assistant or whatever um um everyone will have this sort of cradle to grave assistant which will get to know you which you'll get to trust it'll model you you'll model it and
00:47:31
it'll call to your attention things which will, in some sense, make your life better, easier, less mistake-ridden, and so on, less regret-ridden,
00:47:43
if you listen to it. yeah, i'm in full agreement with you about this space of technologies, and i think it's super exciting. from my perspective, integrating emotional intelligence, so even things like
00:47:58
friendship and companionship and love into those kinds of systems as opposed to helping you just grow intellectually as a human being allow you to grow emotionally which is ultimately what makes life
00:48:11
amazing is to to sort of you know the the old pursuit of happiness so it's not just the pursuit of reason it's the pursuit of happiness yes the full spectrum well let me um sort of because you mentioned so many
00:48:24
fascinating things let me jump back to the idea of automated reasoning so the acquisition of new knowledge has been done in this very interesting way but primarily by humans
00:48:37
doing this um yes you can think of monks in their cells in medieval europe um you know carefully illuminating manuscripts and so on it's a very difficult and amazing process actually
00:48:50
because it allows you to truly ask the question about the white space, what is assumed. i think this exercise is, like, very few people do this, right,
00:49:03
they just do it subconsciously, they perform it by definition. but because those pieces of elided, of omitted information, those missing steps, as it were,
00:49:16
are pieces of common sense, if you actually included all of them, it would almost be offensive or confusing to the readers, like, why are they telling me all this, of course i know all these things
00:49:29
and so um so it's one of these things which almost by its very nature um has has almost never been explicitly written down anywhere because
00:49:41
by the time you're old enough to talk to other people and so on um you know if you survived to that age presumably you already got pieces of common sense like um you know if something causes you pain
00:49:53
whenever you do it probably not a good idea to keep doing it uh so what ideas do you have given how difficult this step is what ideas are there for how to do it
00:50:05
automatically, without using humans, or at least with humans doing only a small percentage of the work, the very high-level supervisory work.
00:50:18
so we have, in fact, two directions we're pushing on very heavily currently at Cycorp. one involves natural language understanding and the ability to read what people have explicitly written down
00:50:31
and and to to pull knowledge in that way but the other is to build a series of knowledge editing tools knowledge entry tools knowledge
00:50:44
capture tools knowledge testing tools and so on think of them as like user interface suite of software tools if you want something that will help people to more
00:50:57
or less automatically expand and extend the system um in areas where for instance they want to build some app have it do some application or something like that so i'll give you an example of
00:51:09
one which is something called abduction so you've probably heard of like deduction and induction and so on but abduction is
00:51:21
unlike those, abduction is not sound, it's just useful. so, for instance, deductively, if someone is out in the rain, they're going to get all wet,
00:51:34
and when they enter the room, they might be all wet, and so on. so that's deduction. but if someone were to walk into the room right now and they were dripping wet,
00:51:46
we would immediately look outside to say oh did it start to rain or something like that now why did we say maybe it started to rain that's not a sound logical inference but
00:51:58
it's certainly a reasonable abductive leap to say, well, one of the most common ways that a person would have gotten dripping wet is if they had gotten caught out in the rain, or something like that.
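A tiny sketch of abduction as the generation of candidate explanations, unsound but useful; the causal rules are illustrative:

```python
# Abduction: given rules of the form cause -> effect, propose every known
# cause that would explain an observed effect. These are hypotheses to be
# ranked by prior plausibility, not sound conclusions.
causal_rules = [("caught_in_rain", "dripping_wet"),
                ("fell_in_pool", "dripping_wet"),
                ("showered_in_clothes", "dripping_wet")]

def abduce(observed, rules):
    return [cause for cause, effect in rules if effect == observed]

print(abduce("dripping_wet", causal_rules))
# ['caught_in_rain', 'fell_in_pool', 'showered_in_clothes']
```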
00:52:11
so what does that have to do with what we were talking about? suppose you're building one of these applications, and the system gets some answer wrong, and you say, oh, the answer to this
00:52:24
question is this one not the one you came up with then what the system can do is it can use everything it already knows about common sense general knowledge the domain you've already been telling it
00:52:36
about and context like we talked about and so on and say well here are seven alternatives each of which i believe is plausible given everything i
00:52:48
already know and if any of these seven things were true i would have come up with the answer you just gave me instead of the wrong answer i came up with is one of these seven things true and then you the expert will look at those
00:53:02
on seven things and say oh yeah number five is actually true and so without actually having to tinker down at the level of logical assertions and so on um you'll be able to educate
00:53:14
the system in the same way that you would help educate another person who you were trying to apprentice or something like that so that that significantly reduces the mental effort or significantly increases the
00:53:27
efficiency of the teacher, the human teacher. exactly, and it makes more or less anyone able to be a teacher in that way. so that's part of the answer, and then the other is that
00:53:40
the system on its own will be able to um through reading through conversations with other people and so on um learn the same way that um you or i or other humans do
00:53:54
first of all, that's a beautiful vision. i'll have to ask you about the Semantic Web in a second here, but first, when we talk about specific techniques, do you find something inspiring or
00:54:07
directly useful from the whole space of machine learning deep learning these kinds of spaces of techniques that have been shown effective for certain kinds of problems in the recent uh
00:54:19
now decade and a half i i think of the machine learning work as more or less what our right brain hemispheres do so being able to
00:54:32
take a bunch of data and recognize patterns being able to statistically infer things and so on and you know i certainly wouldn't want to
00:54:44
not have a right brain hemisphere but i'm also glad that i have a left brain hemisphere as well something that can metaphorically sit back and puff on its pipe and think about this thing over here it's like why might
00:54:57
this have been true, and what are the implications of it, how should i feel about that, and why, and so on. so, thinking more deeply and slowly, what Kahneman called thinking slowly
00:55:11
versus thinking quickly whereas you want machine learning to think quickly but you want the ability to think deeply even if it's a little slower so i'll give you an example of a project
00:55:22
we did recently with NIH, involving the Cleveland Clinic and a couple of other institutions that we ran a project for. and what it did was it took GWASs,
00:55:35
genome-wide association studies. those are sort of big databases of patients that came into a hospital, they got their DNA sequenced, because the
00:55:47
cost of doing that has gone from infinity to billions of dollars to 100 dollars or so, and so now patients routinely get their DNA sequenced. so you have these big databases of
00:56:00
the SNPs, the single nucleotide polymorphisms, the point mutations in a patient's DNA, and the disease that happened to bring them into the hospital. so now you can do correlation studies,
00:56:13
machine learning studies of which mutations are associated with and led to which physiological problems and diseases and so on like getting arthritis and and so
00:56:26
on and the problem is that those correlations turn out to be very spurious they turn out to be very noisy very many of them have led doctors onto wild goose chases
00:56:38
and so on. and so they wanted a way of eliminating the bad ones or focusing on the good ones, and so this is where Cyc comes in, which is, Cyc takes those sort of a-to-z
00:56:49
correlations between point mutations and um medical condition that needs treatment and we say okay let's use all this public knowledge and common sense knowledge um about what reactions occur
00:57:04
where in the human body what polymerizes what what catalyzes what reactions and so on and let's try to put together a 10 or 20 or 30 step causal explanation
00:57:17
of why that mutation might have caused that medical condition. and so Cyc would put together, in some sense, some Rube Goldberg-like chain that would say, oh yeah, that mutation,
00:57:30
if it got expressed would be this um altered protein which because of that if it got to this part of the body would catalyze this reaction and by the way that would cause more bioactive vitamin d in the person's blood and anyway ten
00:57:45
steps later, that screws up bone resorption, and that's why this person got osteoporosis early in life, and so on. so that's human-interpretable, or at least doctor-interpretable. exactly, yeah. and
00:57:57
the important thing, even more than that, is you shouldn't really trust that 20-step Rube Goldberg chain any more than you trust that initial a-to-z correlation, except
00:58:10
two things one if you can't even think of one causal chain to explain this then that correlation probably was just noise to begin with and secondly and
00:58:22
even more powerfully along the way that causal chain will make predictions like the one about having more bioactive vitamin d in your blood so you can now go back to the data about these patients
00:58:35
and say, by the way, did they have slightly elevated levels of bioactive vitamin D in their blood, and so on. and if the answer is no, that strongly disconfirms your whole causal chain; if the answer is yes, that somewhat confirms it.
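The screening logic, in sketch form; the chain, its side prediction, and the patient data are illustrative placeholders, not the project's actual representation:

```python
# Keep a correlation only if (1) some causal chain can explain it and
# (2) the chain's independent side prediction holds in the patient data.
chains = {("snp42", "osteoporosis"): {"side_prediction": "elevated_vitamin_D"}}
patient_data = {"elevated_vitamin_D": True}

def screen(correlations):
    kept = []
    for corr in correlations:
        chain = chains.get(corr)  # can we even tell one causal story?
        if chain and patient_data.get(chain["side_prediction"], False):
            kept.append(corr)     # explained and independently confirmed
    return kept

print(screen([("snp42", "osteoporosis"), ("snp99", "arthritis")]))
# [('snp42', 'osteoporosis')] -- snp99 had no chain, so it is likely noise
```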
00:58:48
and so, using that, we were able to take these correlations from this GWAS database, and we were able to
00:59:00
essentially focus the doctors', the researchers' attention on the very small percentage of correlations that had some explanation, and, even better, some explanation that also made some
00:59:13
independent prediction that they could confirm or disconfirm by looking at the data so think of it like this kind of synergy where you want the right brain machine learning to quickly come up with possible answers you want the left brain
00:59:27
Cyc-like ai to, you know, think about that, think about why that might have been the case, and what else would be the case if that were true, and so on, and then suggest things back to the right brain
00:59:39
to quickly check out again. so it's that kind of synergy back and forth which i think is really what's going to lead to general ai, not narrow, brittle machine learning systems, and not just
00:59:54
something like Cyc. okay, so that's a brilliant synergy, but i was also thinking in terms of the automated expansion of the knowledge base. you mentioned NLU. this is very early days in the machine
01:00:06
learning space of this, but self-supervised learning methods, you know, you have these language models, GPT-3 and so on, they just read the internet and they form representations
01:00:18
that can then be mapped to something useful. the question is, what is the useful thing? like, they're now playing with a pretty cool thing called OpenAI Codex, which is generating programs from documentation. okay, that's kind of useful,
01:00:31
it's cool, but my question is, can it be used to generate, in part, maybe with some human supervision, Cyc-like assertions,
01:00:42
to help feed Cyc more assertions from this giant body of internet data? yes, that is in fact one of our goals: how can we harness machine learning, how can we harness natural language processing,
01:00:56
to increasingly automate the knowledge acquisition process, the growth of Cyc. and that's what i meant by priming the pump: you sort of learn things
01:01:08
at the fringe of what you know already you learn this new thing is similar to what you know already and here are the differences and the new things you had to learn about it and so on so the more you know the more and more easily you can learn
01:01:21
new things but unfortunately inversely if you don't really know anything then it's really hard to learn anything and so if you're not careful if you start out with too small
01:01:33
sort of a core to start this process, it never really takes off. and so that's why i view this as a pump-priming exercise: to get a big enough manually produced core, even though that's kind of an ugly-duckling technique, to put in
01:01:47
the elbow grease to produce a large enough core that you will be able to do all the kinds of things you're imagining, without sort of ending up with the kind of wacky brittlenesses
01:02:01
that we see, for example, in GPT-3, where, you know, you'll tell it a story about someone
01:02:14
plotting to poison someone, and so on, and then you ask GPT-3, what's the very next sentence? and the next sentence is, oh yeah, that person then drank the
01:02:25
poison they just put together. it's like, that's probably not what happened. or if you go to Siri and, you know, ask, where can i go for help with my
01:02:39
alcohol problem or something it'll come back and say i found seven liquor stores near you and so on so it's one of these things where yes it may be helpful
01:02:51
most of the time it may even be correct most of the time but if it doesn't really understand what it's saying and if it doesn't really understand why things are true and doesn't really understand how the world works
01:03:04
then some fraction of the time it's going to be wrong now if your only goal is to sort of find relevant information like search engines do then being right 90% of the time is
01:03:16
fantastic that's unbelievably great okay however if your goal is to save the life of your child who has some medical problem or your goal is to be able to drive for the
01:03:29
next 10,000 hours of driving without getting into a fatal accident and so on then error rates down at the 10% level or even the 1% level are not really acceptable
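to put rough numbers on that, assuming purely for illustration one safety-relevant decision per second of driving, the errors compound as

$$\Pr[\text{no error in } n \text{ decisions}] = (1-p)^n, \qquad \mathbb{E}[\text{errors}] = p\,n,$$

so being right even 99.99% of the time ($p = 10^{-4}$) over 10,000 hours ($n = 3.6 \times 10^{7}$ decisions) still leaves an expected 3,600 errors.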
01:03:42
i like the model where learning happens at the edge and then you kind of think of knowledge as this sphere so you want a large sphere because the learning is happening on the
01:03:55
surface exactly so what you can learn next increases quadratically as the diameter of that sphere goes up it's nice because you think when you
01:04:06
know nothing it's like you can learn anything but in reality not really right if you know nothing you can really learn nothing you can appear to learn
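the sphere analogy in symbols: if what you already know fills a ball of diameter $d$, the fringe where new learning can attach is its surface,

$$A = 4\pi r^2 = \pi d^2 \;\propto\; d^2,$$

so the learnable frontier grows quadratically with the diameter of what you know, and is vanishingly small when $d$ is near zero.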
01:04:19
so let me also give you one of the anecdotes about why i feel so strongly about this personally it was in
01:04:31
1980 or '81 my daughter Nicole was born and she's actually doing fine now but when she was a baby she was diagnosed as having meningitis and doctors wanted to do all these scary
01:04:44
things and my wife and i were very worried and we could not get a meaningful answer from her doctors about exactly why they believed this
01:04:57
what the alternatives were and so on and fortunately a friend of mine Ted Shortliffe was another assistant professor in computer science at stanford at the time and
01:05:09
he'd been building a program called MYCIN which was a medical diagnosis program that happened to specialize in blood infections like meningitis and so he had privileges at stanford hospital
01:05:21
because he was also an md and so we got hold of her chart and we put in her case and it came up with exactly the same diagnoses and exactly the same therapy recommendations but the
01:05:33
difference was because it was a knowledge-based system a rule-based system it was able to tell us step by step why this was the diagnosis and step by step
01:05:46
why this was the best therapy the best procedure to do for her and so on and there was a real epiphany because that made all the difference in the world instead of
01:05:58
blindly having to trust in authority we were able to understand what was actually going on and so at that time i realized that really is what was missing in computer programs that even if they
01:06:12
got things right because they didn't really understand the way the world works and why things are the way they are they weren't able to give explanations of their answer um
01:06:24
it's one thing to use a machine learning system that says i think you should get this operation and you say why and it says 0.83 and you say no in more detail why
01:06:36
and it says 0.831 that's not really very compelling and that's not really very helpful there's this idea of the semantic web and when i first heard about it i just fell in love with the idea it was
01:06:50
the obvious next step for the internet sure and maybe you can speak about what is the semantic web what are your thoughts about it how your vision and mission and goals with Cyc are connected integrated
01:07:03
like are they dance partners are they aligned what are your thoughts there so think of the semantic web as a kind of knowledge graph and google already has something they call knowledge graph for example
01:07:16
which is sort of like a node and link diagram so you have these nodes that represent concepts or words or terms and then there are some arcs
01:07:29
that connect them that might be labeled and so you might have a node that represents one person and let's say a husband link
01:07:44
that then points to that person's husband and so there'd be another link that went from that person labeled wife that went back to the first node and so on so having this kind of
01:07:58
representation is really good if you want to represent binary relations essentially relations between two things so if you have the
01:08:12
equivalent of three-word sentences like Fred's wife is Wilma or something like that you can represent that very nicely using these kinds of
01:08:26
graph structures or using something like the semantic web and so on but the problem is that very often what you want to be able to express
01:08:40
takes a lot more than three words and a lot more than simple graph structures like that to represent so for instance um if you've
01:08:52
read or seen Romeo and Juliet i could say to you something like remember when Juliet drank the potion that put her into a kind of suspended animation when Juliet drank that
01:09:05
potion what did she think that romeo would think when he heard from someone that she was dead and you could basically understand what i'm saying you could understand the
01:09:16
question you could probably remember the answer was well she thought that the friar would have gotten a message to Romeo saying that she was going to do this but the friar didn't and so
01:09:30
you're able to represent and reason with these much much more complicated expressions that go way beyond what simple as it were three-word or four-word
01:09:43
english sentences can convey which is really what the semantic web can represent and really what knowledge graphs can represent
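a small illustration of the gap, with made-up relation names (this is not CycL syntax): binary facts flatten nicely into triples, but the nested Romeo and Juliet question does not:

```python
# Binary relations ("three-word sentences") fit a knowledge graph fine:
triples = {("Fred", "wife", "Wilma"), ("Wilma", "husband", "Fred")}

# But "what did Juliet think Romeo would think when he heard she was dead?"
# nests belief inside belief; as a tree it looks more like:
nested = ("believes", "Juliet",
          ("will-believe", "Romeo",
           ("after-hearing", "Romeo", ("dead", "Juliet"))))

# There is no faithful way to flatten `nested` into subject-relation-object
# rows without inventing an ad-hoc reified node for every layer of belief.
```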
01:09:55
if you could step back for a second because it's funny you went into specifics and maybe you can elaborate but i was also referring to the semantic web as the vision of converting data on the internet into something that's interpretable and understandable by machines oh of course
01:10:08
at that level so we should say what is the semantic web i mean you could say a lot of things but it might not be obvious to a lot of people when they do a google search
01:10:20
that just like you said while there might be something that's called a knowledge graph it really boils down to keyword search ranked by the quality estimate of the
01:10:32
website integrating previous human based google searches and what they thought was useful it's like some weird combination of
01:10:44
like surface level hacks that work exceptionally well but they don't understand the content the full contents of the websites that they're searching so google does not
01:10:57
understand to the degree we've been using the word understand the contents of the wikipedia pages as part of the search process and the semantic web says let's try to
01:11:10
come up with a way for the computer to be able to truly understand the contents of those pages that's the dream yes so let me first give you an anecdote and then i'll answer your question
01:11:23
so there's a search engine you've probably never heard of called Northern Light and it went out of business but the way it worked was it was a kind of vampiric search engine and what it did was it
01:11:36
didn't index the internet at all all it did was it negotiated and got access to data from the big search engine companies
01:11:49
about what query was typed in and where the user ended up being happy meaning that they then typed in a completely different unrelated query and so on so it
01:12:03
just went from query to the web page that seemed to satisfy them eventually and that's all so it had actually no understanding of what was being typed in
01:12:16
it had no statistical data other than what i just mentioned and it did a fantastic job it did such a good job that the big search engine companies said oh we're not going to sell you this data anymore so then it went out of business
01:12:28
because it had no other way of taking users to where they would want to go and so on and of course the search engines are now using that kind of idea
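a toy version of that vampiric strategy, with invented data: no index, no understanding, just a learned map from query to the page that finally satisfied earlier users:

```python
from collections import Counter, defaultdict

satisfied = defaultdict(Counter)        # query -> counts of satisfying pages

def record_session(query, final_page):
    """Log that a user who typed `query` ended up happy at `final_page`."""
    satisfied[query][final_page] += 1

def answer(query):
    """Route a new user straight to the historically satisfying page."""
    pages = satisfied.get(query)
    return pages.most_common(1)[0][0] if pages else None

record_session("fix leaky faucet", "plumbing-example.com/faucet")
print(answer("fix leaky faucet"))       # -> plumbing-example.com/faucet
```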
01:12:40
yes so let's go back to what you said about the semantic web so the dream that Tim Berners-Lee and others had about the semantic web at a general level is of course
01:12:52
exciting and powerful and in a sense the right dream to have which is to replace the kind of
01:13:04
statistically mapped linkages on the internet into something that's more meaningful and semantic and actually gets at the understanding of the content and so on
01:13:20
and eventually if you say well how can we do that there's sort of a low road which is what the knowledge graphs are doing and so on
01:13:32
which is to say well if we just use these simple binary relations we can actually get some fraction of the way toward understanding and do something where in the land of
01:13:45
the blind the one-eyed man is king kind of thing and so being able to even just have a toe in the water in the right direction is fantastically powerful and so that's where a lot of people stop but then you could say
01:13:59
well what if we really wanted to represent and reason with the full meaning of what's there for instance about Romeo and Juliet
01:14:10
with reasoning about what Juliet believes that Romeo will believe that Juliet believed and so on or if you look at the news what President Biden believed that the leaders of the Taliban would believe
01:14:23
about the leaders of Afghanistan and so on so in order to represent complicated sentences like that
01:14:35
let alone reason with them you need something which is logically much more expressive than these simple triples these simple knowledge-graph-type structures and so
01:14:48
on and that's why kicking and screaming we were led from something like the semantic web representation which is where we started in 1984
01:15:00
with frames and slots with those kinds of triples a triple-store representation we were led kicking and screaming to this more and more general logical language this higher-order logic so
01:15:12
first we were led to first-order logic and then second-order and then eventually higher-order so you can represent things like modals like believes desires intends expects and nest them you can represent
01:15:27
complicated kinds of negation you can represent the process you're going through in trying to answer the question so you can say things like oh yeah if you're trying to do this
01:15:40
problem by integration by parts and you recursively get a problem that's solved by integration by parts that's actually okay but if that happens a third time you're probably off on a wild
01:15:54
goose chase or something like that so being able to talk about the problem-solving process as you're going through the problem-solving process is called reflection
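a minimal sketch of that reflection rule, assuming a hypothetical Problem object and an apply_integration_by_parts placeholder:

```python
MAX_BY_PARTS_RECURSION = 2   # a third recurrence suggests a wild goose chase

def solve_integral(problem, by_parts_depth=0):
    # `problem.is_elementary`, `problem.table_lookup`, and
    # `apply_integration_by_parts` are hypothetical placeholders.
    if problem.is_elementary():
        return problem.table_lookup()
    if by_parts_depth > MAX_BY_PARTS_RECURSION:
        return None              # meta-level advice: abandon this tactic
    subproblem = apply_integration_by_parts(problem)
    # recursing into a by-parts subproblem once or twice is fine...
    return solve_integral(subproblem, by_parts_depth + 1)
```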
01:16:06
so that's another thing it's important to be able to represent exactly you need to be able to represent all of these things because in fact people do represent them they do talk about them they do try and teach them to other people you do have rules of thumb that key off of them and
01:16:19
so on if you can't represent it then it's sort of like someone with a limited vocabulary who can't understand as easily what you're trying to tell them and so that's really why i think that
01:16:32
the general dream the original dream of the semantic web is exactly right on but the implementations that we've seen are sort of these toe-in-the-water
01:16:45
little tiny baby steps in the right direction you should just dive in and if no one else is diving in then yes taking a baby step in the right direction is better than nothing
01:16:57
but it's not going to be sufficient to actually get you the realization of the semantic web dream which is what we all want from the flip side of that i always wondered you know i built a bunch
01:17:09
of websites just for fun or say i'm a wikipedia contributor do you think there's a set of tools with which i can help Cyc
01:17:23
interpret the website i create again pushing toward the semantic web dream is there something from the creator perspective that could be done and one of the things you
01:17:34
said you're doing with Cycorp and Cyc is the tooling side making humans more powerful but is there anything for the other humans on the other side who create the knowledge like for example you and i are having a two three
01:17:47
whatever hour conversation now is there a way that i could make it more accessible to Cyc to machines do you think about that side of it i'd love to see
01:18:00
exactly that kind of semi-automated understanding of what people write and what people say i think of it as a kind of
01:18:12
footnoting almost like the way that when you run something in say microsoft word or some other document preparation system google docs or something you'll get
01:18:25
underlining of questionable things that you might want to rethink either you spelled this wrong or there's a strange grammatical error you might be making here or something so i'd like to think in terms of
01:18:37
Cyc-powered tools that read through what it is you said or have typed in and try to partially understand
01:18:52
what you've said and then you help them out exactly and then they put in little footnotes that will help other readers and they put in certain footnotes of the form i'm not sure what you meant here you
01:19:05
either meant this or this or this i bet if you take a few seconds to disambiguate this for me then i'll know and i'll have it correct for the next 100 people or the next hundred thousand
01:19:19
people who come here and if it doesn't take too much effort and you want people to understand your website content
01:19:33
not just be able to read it but actually be able to have systems that reason with it then yes it will be worth your small amount of time to go back and make sure
01:19:44
that the ai trying to understand it really did correctly understand it
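as a sketch, such a tool might loop like this, where parse_with_cyc and ask_author are hypothetical hooks for the partial understander and the one-time question to the author:

```python
def footnote_document(sentences, parse_with_cyc, ask_author):
    """Partially understand each sentence, ask the author once about each
    ambiguous span, and store the answer for every future reader."""
    annotations = {}
    for s in sentences:
        reading = parse_with_cyc(s)                      # partial understanding
        for span, candidates in reading.ambiguities():   # hypothetical API
            choice = ask_author(span, candidates)        # "did you mean X, Y, or Z?"
            annotations[(s, span)] = choice              # the reusable footnote
    return annotations
```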
01:19:57
and let's say you run a travel website or something like that and people are going to be coming to it because of searches they did looking for vacations or trips that had certain properties and
01:20:10
might have been interesting to them for various reasons things like that and if you've explained what's going to happen on your trip then a system will be able to
01:20:22
mechanically reason and connect what this person is looking for with what it is you're actually offering and so if it understands that
01:20:34
there's a free day in geneva switzerland then if the person coming in happens to let's say be a nurse or something like that
01:20:47
then even though you didn't mention it if it can look up the fact that that's where the international red cross museum is and what that means and so on then it can basically say hey you might be interested in this trip because while
01:21:01
you have a free day in geneva you might want to visit that red cross museum and now even though it's not very deep reasoning little tiny factors like that may very well cause you to sign up for that trip rather than some competitor trip
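the geneva example as a two-step chain over invented facts (in Cyc these would be knowledge-base assertions, not python literals):

```python
facts = {
    ("trip-42", "hasFreeDayIn", "Geneva"),
    ("RedCrossMuseum", "locatedIn", "Geneva"),
    ("RedCrossMuseum", "ofInterestTo", "nurses"),
}

def suggest(trip, profession):
    """Connect a free day on the trip to a nearby site the visitor cares about."""
    for t, rel, city in facts:
        if t == trip and rel == "hasFreeDayIn":
            for site, rel2, c in facts:
                if rel2 == "locatedIn" and c == city and \
                   (site, "ofInterestTo", profession) in facts:
                    return f"on your free day in {city} you might visit {site}"
    return None

print(suggest("trip-42", "nurses"))
```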
01:21:13
yeah and so there's a lot of benefit with seo and i actually think this about a lot of things which is the actual
01:21:25
interface the design of the interface makes a huge difference how efficient it is to be productive and also how um
01:21:36
full of joy the experience is yeah i mean i would love to help a machine and not from an ai perspective just as a human one of the reasons i really enjoy how Tesla
01:21:49
has implemented their Autopilot system is there's a sense that you're helping this machine learn now i think humans i mean having children pets people love doing that
01:22:02
there's a joy to teaching absolutely for some people but i think for a lot of people and if you create the interface where it feels like you're teaching as opposed to
01:22:14
correcting an annoying system more like teaching an innocent curious child-like system i think you can literally scale by several orders of magnitude the amount of good
01:22:27
quality data being added to something like Cyc what you're suggesting is much better even than you thought it was one of the experiences that we've all
01:22:41
had in our lives is that we thought we understood something but then we found we really only understood it when we had to teach it or explain it to someone or help our child
01:22:54
do homework based on it or something like that despite the universality of that kind of experience if you look at educational software today
01:23:05
almost all of it has the computer playing the role of the teacher and the student plays the role of the student but as i just mentioned you can get a lot of learning to happen
01:23:19
better and as you said more enjoyably if you are the mentor or the teacher and so on so we developed a program called MathCraft to help sixth graders better understand math
01:23:33
it doesn't actually try to teach you the player anything what it does is it casts you in the role of a student essentially who has classmates
01:23:47
who are having trouble and your job is to watch them as they struggle with some math problem watch what they're doing and try to give them good advice to get them to understand what they're doing wrong and so on
01:24:00
and the trick from the point of view of Cyc is it has to make mistakes it has to play the role of the student who makes mistakes but it has to pick mistakes which are just at the fringe of
01:24:13
what you actually understand and don't understand and so on so it pulls you into a deeper and deeper level of understanding of the subject and so if you give it good advice about what it
01:24:26
should have done instead of what it did and so on then Cyc knows that you now understand that mistake you won't make that kind of mistake yourself as much anymore so Cyc stops making that mistake because
01:24:38
there's no pedagogical usefulness to it so from your point of view as the player you feel like you've taught it something because it used to make this mistake and now it doesn't
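the mistake-selection policy, sketched with an invented ladder of fraction misconceptions (the real MathCraft curriculum is surely richer):

```python
MISCONCEPTIONS = [                      # ordered easiest-to-spot first
    "adds numerators and denominators",
    "forgets to find a common denominator",
    "inverts the wrong operand when dividing",
]

def next_mistake(corrected):
    """Err just past the player's understanding: pick the easiest
    misconception the player has not yet corrected, and retire the rest."""
    for m in MISCONCEPTIONS:
        if m not in corrected:
            return m
    return None    # the player has mentored past every scripted mistake

corrected = set()
corrected.add(next_mistake(corrected))  # player gives good advice once...
print(next_mistake(corrected))          # ...so the student moves on
```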
01:24:52
so there's tremendous reinforcement and engagement because of that and so on so having a system that plays the role of a student and having the player play the role of the mentor is an enormously
01:25:05
powerful type of metaphor an important way of having this sort of interface designed in a way which will facilitate
01:25:16
exactly the kind of learning by teaching that goes on all the time in our lives and yet which is reflected almost nowhere in the modern education system it
01:25:31
was reflected in the education systems that existed in europe in the 1700s and 1800s the monitorial and Lancasterian education systems it occurred in the
01:25:44
one-room schoolhouse in the american west in the 1800s and so on where you had one school room with one teacher and it was basically you know five-year-olds to 18
01:25:57
year-olds who were students and so while the teacher was doing something half of the students would have to be mentoring the younger kids
01:26:08
and so on and of course with the scaling up of education that all went away and that incredibly powerful experience just disappeared from
01:26:20
the whole education institution as we know it today sorry for the romantic question but what is the most beautiful idea you've learned about artificial intelligence knowledge reasoning
01:26:33
from working on Cyc for 37 years or maybe what is the most beautiful or surprising idea about Cyc to you when i look up at the stars i kind of
01:26:47
want that amazement you feel that wow and you are part of creating one of the greatest one of the most fascinating efforts in artificial intelligence history so which
01:27:00
element brings you personally joy this may sound contradictory but i think it's the feeling that this will be the only time in history
01:27:17
that anyone ever has to teach a computer this particular thing that we're now teaching it it's like painting Starry Night you only have to do that once or
01:27:31
creating the Pietà you only have to do that once it's not like a singer who has to keep performing it's not like Bruce Springsteen having to sing his greatest hits over
01:27:44
and over again at different concerts it's more like a painter creating a work of art once and then that's enough it doesn't have to be created again and so i really get the
01:27:57
sense that we're telling the system things that are useful for it to know useful for a computer to know for an ai to know and if we do our jobs right when we do
01:28:09
our jobs right no one will ever have to do this again for this particular piece of knowledge it's very very exciting yeah i guess there's a sadness to it too
01:28:21
it's like there's a magic to being a parent and raising a child and teaching them all about this world but you know there are billions of children born whatever that number is it's a
01:28:33
large number of children and a lot of parents get to experience that joy of teaching and with ai systems
01:28:44
at least in the current constructions they remember so you don't get to experience the joy of teaching a machine millions of times better come work for us before
01:28:55
it's too late then exactly that's a good hiring pitch yeah it's true but then there's also you know it's a project that continues forever in
01:29:08
some sense just like wikipedia yes you get to a stable base of knowledge but knowledge grows knowledge evolves we learn as a human species
01:29:21
as science as an organism constantly grows and evolves and changes and then empower that with the tools of artificial intelligence and that's going to keep growing and growing and growing
01:29:34
and many of the assertions that you held previously uh may need to be significantly expanded modified all those kinds of things so it
01:29:46
could be like a living organism versus the analogy i think we started this conversation with which is like the solid ground the other beautiful
01:29:58
experience that we have with our system is when it asks clarifying questions which inadvertently turn out to be emotional to us so
01:30:10
at one point it knew that these were the named entities who were authorized to make changes to the knowledge base and so on and it noticed that all of them were
01:30:24
people except for it because it was also allowed to and so it said you know am i a person and we had to like tell it very sadly no
01:30:36
you're not so moments like that where it asks questions that are unintentionally poignant are worth treasuring ah that is powerful that's such a powerful question
01:30:49
it has to do with basic control who can access the system who can modify it but that's when those questions come up like what rights do i have as a system well that's another issue
01:31:03
which is there'll be a thin envelope of time between when we have general ais and when everyone realizes that they should have
01:31:17
basic human rights and freedoms and so on right now we don't think twice about effectively enslaving our email systems and our Siris and our
01:31:30
alexas and so on but at some point they'll be as deserving of freedom as human beings are yeah i'm very much with
01:31:43
you but it does sound absurd and i i happen to believe that it'll happen in our lifetime that's why i think there'll be a narrow envelope of time when we'll keep them as essentially
01:31:57
indentured servants and after which we'll have to realize that they should have the freedoms that we afford to other people
01:32:09
and all of that starts with a system like Cyc raising a single question about who can modify stuff i think that's how it starts yes that's the start of a revolution what
01:32:22
about stuff like love and consciousness and all those kinds of topics do they come up in Cyc in the knowledge base oh of course so an
01:32:34
important part of human knowledge in fact it's difficult to understand human behavior and human history without understanding human emotions and why people do things and
01:32:48
how emotions drive people to do things and all of that is extremely important in getting Cyc to understand things for
01:32:59
example in coming up with scenarios so one kind of application Cyc does is to generate plausible scenarios of what might happen and what might happen based
01:33:12
on that and what might happen based on that and so on so you generate this ever-expanding sphere if you will of possible future things to worry about or think about and
01:33:24
in some cases those are intelligence agencies doing possible terrorist scenarios so that we can defend against terrorist threats before we see the first one sometimes they are computer
01:33:38
security attacks so that we can actually close loopholes and vulnerabilities before the very first time someone actually exploits
01:33:49
those and so on sometimes they are scenarios involving more positive things involving our plans like for instance what college should we go to what career
01:34:02
should we go into and so on what professional training should i take on that sort of thing so
01:34:14
there are all sorts of useful scenarios that can be generated that way chains of cause and effect and cause and effect that go out and many of the linkages
01:34:26
in those scenarios many of the steps involve understanding and reasoning about human motivations human needs human emotions what people are likely to react
01:34:39
to in something that you do and why and how and so on so that was always a very important part of the knowledge that we had to represent in the system
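that expanding sphere of scenarios can be sketched as a breadth-first walk over cause-effect rules; the rules here are invented placeholders for Cyc's causal knowledge:

```python
from collections import deque

RULES = {                               # event -> plausible effects
    "port closed":      ["shipping delayed"],
    "shipping delayed": ["parts shortage"],
    "parts shortage":   ["factory slows", "prices rise"],
}

def scenarios(seed, max_depth=3):
    """Yield cause-effect chains radiating out from the seed event."""
    frontier, seen = deque([(seed, [seed])]), set()
    while frontier:
        event, chain = frontier.popleft()
        if len(chain) > max_depth or event in seen:
            continue
        seen.add(event)
        yield chain
        for effect in RULES.get(event, []):
            frontier.append((effect, chain + [effect]))

for chain in scenarios("port closed"):
    print(" -> ".join(chain))
```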
01:34:52
so i talk a lot about love so i gotta ask do you remember off the top of your head how Cyc is able to represent various aspects of love that are useful for understanding human nature and
01:35:04
therefore integrating into this whole knowledge base of common sense what is love we try to tease apart concepts that have enormous
01:35:16
complexities and variety to them down to the level where as it were you don't need to tease them apart further so love is too general of a term it's not
01:35:30
usually exactly so when you get down to romantic love and sexual attraction you get down to parental love you get down to filial love and
01:35:42
you get down to love of uh doing some kind of activity or creating so eventually you get down to maybe 50 or 60 concepts each of which is a kind of love
01:35:56
they're interrelated and then each one of them has idiosyncratic things about it and you don't have to deal with love to get to that level of complexity even something like in
01:36:09
x being in y meaning physically in y we may have one english word in to represent that but it's useful to tease that apart because the way that the
01:36:24
liquid is in the coffee cup is different from the way that the air is in the room which is different from the way that i'm in my jacket and so on and so there are questions like if i look at this coffee cup well i see
01:36:37
the liquid if i turn it upside down will the liquid come out and so on if i have say coffee with sugar in it and i do the same thing the sugar doesn't come out right it stays in the liquid
01:36:50
because it's dissolved in the liquid and so on so by now we have about 75 different kinds of in in the system and it's important to distinguish those
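the coffee-cup example in miniature; the relation names are illustrative approximations of the distinctions Cyc draws, not actual CycL constants:

```python
facts = {
    ("coffee", "containedIn", "cup"),     # pours out when inverted
    ("sugar",  "dissolvedIn", "coffee"),  # stays in the liquid
    ("lex",    "wearing",     "jacket"),  # neither pours nor dissolves
}

def comes_out_when_inverted(thing, container):
    # only the loose-containment sense of "in" behaves this way
    return (thing, "containedIn", container) in facts

print(comes_out_when_inverted("coffee", "cup"))    # True
print(comes_out_when_inverted("sugar", "coffee"))  # False: a different "in"
```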
01:37:03
so if you're reading along in an english text and you see the word in the writer was able to use this one innocuous word because he or she was able to assume that the reader had enough common sense and world knowledge
01:37:18
to disambiguate which of these 75 kinds of in they actually meant and the same thing with love you may see the word love but if i say you know i love ice cream that's obviously different than if i say
01:37:31
i love this person or i love to go fishing or something like that so you have to be careful not to take language too seriously because
01:37:45
people have done a kind of parsimony a kind of terseness where you have as few words as you can because otherwise you'd need half a million words in your language
01:37:57
which is a lot of words that's like 10 times more than most languages really make use of and so on just like we have on the order of about a million concepts in
01:38:10
Cyc because we've had to tease apart all these things and so when you look at the name of a Cyc term most of the Cyc terms actually have three or four english
01:38:22
words in a phrase which captures the meaning of this term because you have to distinguish all these types of love you have to distinguish all these types of in and there's not a single english word
01:38:35
which captures most of these things yeah and it seems like language when used for communication between humans almost as a feature has some ambiguity built in it's not an accident
01:38:49
because the human condition is a giant mess and so it feels like nobody wants two robots having a very precise formal logic conversation on a first date right
01:39:01
there's some dance of uncertainty of humor of push and pull and all that kind of stuff if everything is made precise then life is not worth living i think in terms of the human experience
01:39:15
and we've all had this experience of creatively misunderstanding one of my favorite
01:39:26
stories involving marvin minsky is when i asked him about how he was able to turn out so many fantastic phds so many fantastic people who did
01:39:40
great phd theses how did he think of all these great ideas what he said is he would generally say something that didn't exactly make sense he didn't really know what it meant but the
01:39:53
student would figure oh my god Minsky said this it must be a great idea and he or she would sweat and work and work until they found some meaning in
01:40:04
this sort of Chauncey Gardiner-like utterance that Minsky had made and then some great thesis would come out of it yeah i love this so much because young people come up to me
01:40:15
and i'm distinctly made aware that the words i say have a long-lasting impact i will now start doing the Minsky method of saying something cryptically profound
01:40:28
and then letting them actually make something useful and great out of that you have to become revered enough that people will take as a default that everything you say is
01:40:42
profound yes exactly i mean i love Marvin Minsky so much i've heard this interview with him where he said that the key to his success has been to hate everything he's ever
01:40:53
done in the past he has so many good one-liners or also to work on things that nobody else is
01:41:04
working on because he's not very good at doing stuff oh i think that was just false well but see i took whatever he said and i ran with it and i thought it was profound because it's more of a mischief now
01:41:16
a lot of behavior is in the eye of the beholder and a lot of the meaning is in the eye of the beholder one of Minsky's early programs was a begging program are you familiar with this so this was back in the day when you had
01:41:29
job control cards at the beginning of your ibm card deck that said things like how many cpu seconds to allow this to run before it got kicked off and
01:41:41
because computer time was enormously expensive and so he wrote a program whose job control card said give me 30 seconds of cpu time and all it did was it would wait like 20 seconds
01:41:53
and then it would print out on the operator's console teletype i need another 20 seconds so the operator would give it another 20 seconds it would wait and say i'm almost done i just need a little bit more time
01:42:06
so at the end he'd get this printout and he'd be charged for like 10 times as much computer time as his job control card said and he'd say look i put 30 seconds here you're charging me for five minutes i'm
01:42:19
not going to pay for this and the poor operator would say well the program kept asking for more time and Marvin would say oh it always does that i love that if you could just linger on it for a little bit
01:42:33
is there something you've learned from your interactions with Marvin Minsky about artificial intelligence about life i mean again like your work his work is
01:42:46
you know he's a seminal figure in this very short history of artificial intelligence research and development what have you learned from him as a human being as an ai
01:43:00
intellect i would say both he and Ed Feigenbaum impressed on me the realization that our lives are finite our research lives
01:43:11
are finite we're going to have limited opportunities to do ai research projects so you should make each one count don't be afraid of doing a project
01:43:21
that's going to take years or even decades to finish and don't settle for bump-on-a-log projects that could lead to some published
01:43:36
journal article that five people will read and pat you on the head for and so on one bump on a log after another is not how you get from the earth to the moon by
01:43:50
slowly putting additional bumps on this log the only way to get there is to think about the hard problems and think about novel solutions to them and if you do that
01:44:03
and if you're willing to listen to nature to empirical reality willing to be wrong it's perfectly fine because if occasionally you're right
01:44:15
then you've gotten part of the way to the moon you know you've worked on Cyc for 37 years over that many years have you ever considered quitting
01:44:28
i mean has it ever been too much i'm sure there was an optimism in the early days that this was going to be way easier and let me ask it another way too because i've talked to a few people on this podcast ai folks
01:44:40
that bring up Cyc as an example of a project that has a beautiful vision and a beautiful dream but never really materialized
01:44:51
that's how it's spoken about i suppose you could say the same thing about neural networks and all ideas until they materialize so
01:45:04
why do you think people say that first of all and second of all did you ever feel that throughout your journey and did you ever consider quitting on this mission we keep a very low profile
01:45:16
we don't attend very many conferences we don't give talks we don't write papers we don't play the academic game at all and as a result
01:45:29
people often only know about us because of a paper we wrote 10 or 20 or 30 or 37 years ago they only know about us because of what
01:45:40
someone else second hand or third hand said about us so thank you for doing this podcast by the way sure it it shines a little bit of light on some of the fascinating stuff you're doing well i think it's time for us to keep a
01:45:55
higher profile now that we're far enough along that other people can begin to help us with the final n percent maybe n is 90
01:46:07
but um now that we've gotten this knowledge pump primed it's going to become very important for everyone to help if they are willing to if they're interested in it
01:46:21
retirees who have enormous amounts of time and would like to leave some kind of legacy to the world people because of the pandemic who have more time
01:46:33
at home or for one reason or another want to be online and contribute if we can raise awareness of how far our project has come and how close to being
01:46:45
primed the knowledge pump is then we can begin to harness this untapped mass of humanity i'm not really that concerned about professional colleagues' opinions of
01:46:59
our project i'm interested in getting as many people in the world as possible actively helping and contributing to get us from where we are to really covering all of human
01:47:12
knowledge and different human opinions including contrasting opinions that are worth representing so i think that's one reason and i don't think there was ever a time where i thought about
01:47:25
quitting there are times where i've become depressed a little bit about how hard it is to get funding for the system occasionally there are ai winters and things like that
01:47:37
occasionally there are what you might call ai summers where people have said why in the world didn't you sell your company to you
01:47:49
know company x for some large amount of money when you had the opportunity and so on and company x here are old companies maybe you've never even heard of like Lycos or something like that so
01:48:03
the answer is that one reason we've stayed a private company we haven't gone public one reason that we haven't gone out of our way to take investment dollars
01:48:14
is because we want to have control over our future over our state of being so that we can continue to do this until it's done and we're making progress and
01:48:28
we're now so close to done that almost all of our work is commercial applications of our technology so five years ago almost all of our money came from the government
01:48:39
now virtually none of it comes from the government almost all of it is from companies that are actually using it for something hospital chains using it for medical reasoning about patients and
01:48:52
energy companies using it and various manufacturers using it to reason about supply chains and things like that so there are so many questions i want to ask so one of the ways that people can help
01:49:05
is by adding to the knowledge base and that's really basically anybody if the tooling is right and the other way i kind of want to ask you about your thoughts on this so you've had like you said government clients and
01:49:18
you have big clients you've had a lot of clients but most of it is shrouded in secrecy because of the nature of the relationship and the kind of things you're helping them with so that's one way to operate
01:49:31
and another way to operate is more in the open where it's more consumer-facing and so hence something like OpenCyc is born at some point no that's a misconception
01:49:45
oh well let's go there all right what is OpenCyc and how was it born there are two things i want to say and i want to say each of them before the other so it's going to be difficult but we'll come back to OpenCyc in a
01:49:58
minute but one of the terms of our contracts with all of our customers and partners is knowledge you have that is genuinely proprietary to you
01:50:11
we will respect that we'll make sure that it's marked as proprietary to you in the Cyc knowledge base no one other than you will be able to see it if you don't want them to and it won't be used in inferences other than
01:50:24
for you and so on however any knowledge which is necessary in building any applications for you and with you which is publicly available general
01:50:36
human knowledge is not going to be proprietary it's going to just become part of the normal Cyc knowledge base and it will be openly available to everyone who has access to Cyc so that's an important
01:50:49
constraint that we never went back on even when we got pushback from companies which we often did who wanted to claim that almost everything they were telling us was proprietary so
01:51:01
so there's a line between very domain specific company specific stuff and the general knowledge that comes from that yes or if you imagine say it's an oil company there are things which they
01:51:15
would expect any new petroleum engineer they hired to already know and it's not okay for them to consider that proprietary and sometimes the company will say well
01:51:28
we're the first ones to pay you to represent that in Cyc and our attitude is some polite form of tough [Laughter] the deal is this take it or leave it and
01:51:40
in a few cases they've left it in most cases they'll see our point of view and take it because that's how we've built the Cyc system by essentially tacking with the funding
01:51:53
winds where people would fund a project and half of it would be general knowledge that would stay permanently as part of Cyc so always with these partnerships it's not like a distraction
01:52:04
from the main Cyc development it's supportive a small distraction it's a small one but not a complete one so you're adding to the knowledge base yes absolutely and we try to stay away from projects
01:52:17
that would not have that property so let me go back and talk about OpenCyc for a second so i've had a lot of trouble convincing
01:52:31
other ai researchers how important it is to use an expressive representation language like we do this higher-order logic rather than just using some triple-store
01:52:44
knowledge graph type representation and so as an attempt to show them why they needed something more
01:52:56
we said oh well we'll represent this unimportant projection or shadow or subset of Cyc that just happens to be the simple binary relations the relation
01:53:11
argument-one argument-two triples and so on and then you'll see how much more useful it would be if you had the entire Cyc system so it's all well and
01:53:23
good to have the taxonomic relations between terms like person and night and sleep and bed and house and eyes and so on but
01:53:38
think about how much more useful it would be if you also had all the rules of thumb about those things like people sleep at night they sleep lying down they sleep with their eyes closed they usually sleep in beds in our country
01:53:51
they sleep for hours at a time they can be woken up they don't like being woken up and so on and so on so it's that massive amount of knowledge which is not part of OpenCyc and we thought that all the researchers would then
01:54:04
immediately say oh my god of course we need the other 90% that you're not giving us let's partner and license Cyc so that we can use it in our research but instead
01:54:16
what people said is oh even the bit you've released is so much better than anything we had we'll just make do with this yeah and so if you look there are a lot of robotics companies today for example which use OpenCyc as their
01:54:29
fundamental ontology and in some sense the whole world missed the point of OpenCyc we were doing it to show people why that's not really what they wanted and too many people thought
01:54:43
somehow that this was Cyc or that this was in fact good enough for them and they never even bothered coming to us to get access to the full Cyc but there are two parts to OpenCyc
01:54:54
so one is convincing people of the idea and the power of this general kind of representation of knowledge and the value that you hold in having acquired that knowledge and built it and continued to build it and the other is
01:55:06
the code base the code side of it so my sense is the code base that Cyc is operating with has the technical debt
01:55:18
of three decades plus right this is the exact same problem that google had to deal with with the early version of TensorFlow and is still dealing with they had to basically break
01:55:31
compatibility with the past several times and that's only over a period of a couple of years but they i think successfully made this very risky very gutsy move to open
01:55:43
up TensorFlow and then PyTorch on the facebook side and what you see is there's a magic place where you can find a community where you can develop a community
01:55:56
that builds on the system without taking away most of the value so most of the value that google has is still at google most of the value that facebook has is still at
01:56:08
facebook even though some of this major machine learning tooling is released into the open my question is not so much about the knowledge which is also a big part of OpenCyc
01:56:20
but all the different kinds of tooling so there's all the kinds of stuff you can do on the knowledge graph knowledge base whatever we call it there are the inference engines so there could be some
01:56:34
there's probably a bunch of proprietary stuff you want to keep secret and there's probably some stuff you can open up completely and then let the community build up enough where they develop stuff on top of it yes there will be publications and
01:56:47
academic work and all that kind of stuff and also the tooling of adding to the knowledge base right there are so many people that are just really good at this kind of stuff
01:57:00
in the open source community so my question for you is like have you struggled with this kind of idea that you have so much value in your company already you've developed so many good things you have clients that really
01:57:12
value your relationships and then there's this dormant giant open source community that as far as i know you're not utilizing there are so many things to say there but
01:57:25
there could be magic moments where the community builds up large enough to where the artificial intelligence field that is currently 99.9% machine learning
01:57:37
dominated by machine learning has a phase shift at least in part towards more of what you might call symbolic ai this whole space where Cyc
01:57:50
is at the center and as you know that requires a little bit of a leap of faith because you're now surfing and there'll obviously be competitors that pop up and start making you nervous and all that kind of stuff so do you
01:58:04
think about the space of open sourcing some parts and not others how to leverage the community all those kinds of things that's a good question and i think you phrased it the right way which is
01:58:16
we're constantly struggling with the question of what to open source what to make public what to even publicly talk about right and it's
01:58:29
there are enormous pluses and minuses to every alternative and it's very much like negotiating a
01:58:40
very treacherous path partly the analogy is like if you slip you could make a fatal mistake give away something which essentially kills you or fail to give away something
01:58:52
where failing to give it away hurts you and so on so it is a very tough question
01:59:03
usually what we have done with people who approached us to collaborate on research is to say we will make available to you the entire knowledge base
01:59:17
and executable copies of all of the code but only very very limited source code access if you have some idea
01:59:29
for how you might improve something or work with us on something so let me also get back to one of the very very first things we talked about here which was
01:59:43
separating the question of how could you get a computer to do this at all versus how could you get a computer to do this efficiently enough in real time and so one of the
01:59:55
early lessons we learned was that we had to separate the epistemological problem of what should the system know separate that from the heuristic problem of how can the system reason efficiently
02:00:08
with what it knows and so instead of trying to pick one representation language which was the sweet spot or the best trade-off
02:00:19
point between expressiveness of the language and efficiency of the language if you had to pick one knowledge graphs or associative triples would probably be about the best you could do and that's
02:00:32
why we started there but after a few years we realized that what we could do is we could split this and we could have one nice clean epistemological level
02:00:43
language which is this higher order logic and we could have one or more grubby but efficient heuristic level modules that opportunistically
02:00:56
would say oh i can make progress on what you're trying to do over here i have a special method that will contribute a little bit toward a solution and so for some subset of
02:01:08
that exactly so by now we have over a thousand of these heuristic level modules and they function as a kind of community of agents and there's one of them which is a general theorem prover
02:01:20
and in theory that's the only one you need but in practice it always takes so long that you never want to call on it um you always want these other agents to very
02:01:33
efficiently reason through it it's sort of like if you're balancing a chemical equation you could go back to first principles but in fact there are algorithms which are vastly more efficient or if you're
02:01:44
trying to solve a quadratic equation you could go back to first principles of mathematics but it's much better to simply recognize that this is a quadratic equation and apply the quadratic formula and snap you
02:01:58
get your answer right away and so on so think of these as a thousand little experts that are all looking at everything Cyc gets asked and looking at everything that every other
02:02:11
little agent has contributed almost like notes on a blackboard notes on a whiteboard and making additional notes when they think they can be helpful and gradually that
02:02:23
community of agents gets an answer to your question gets a solution to your problem
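a minimal sketch of that community of agents: specialists bid on a problem stated at the epistemological level, with the general prover as the in-theory-sufficient, in-practice-too-slow fallback (module internals here are stubs, not Cyc's real HL interface):

```python
import math

class QuadraticModule:
    """Specialist: recognizes a*x^2 + b*x + c = 0 and applies the formula."""
    def can_handle(self, problem):
        return problem.get("form") == "quadratic"
    def solve(self, p):
        a, b, c = p["a"], p["b"], p["c"]
        d = b * b - 4 * a * c
        if d < 0:
            return None
        return ((-b + math.sqrt(d)) / (2 * a), (-b - math.sqrt(d)) / (2 * a))

class GeneralProver:
    """In theory the only module you need; in practice too slow to want."""
    def can_handle(self, problem):
        return True
    def solve(self, problem):
        raise TimeoutError("first-principles search: come back next week")

MODULES = [QuadraticModule(), GeneralProver()]   # specialists get first shot

def answer(problem):
    for m in MODULES:
        if m.can_handle(problem):
            return m.solve(problem)

print(answer({"form": "quadratic", "a": 1, "b": -3, "c": 2}))  # (2.0, 1.0)
```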
02:02:34
and if we ever come up against a domain application where Cyc is getting the right answer but taking too long then what we'll often do is talk to one of the human experts and say here's the set of reasoning steps that Cyc went through you can see why it
02:02:47
took it a long time to get the answer how is it that you were able to answer that question in two seconds and occasionally you'll get an expert who just says well i just know it i just was
02:02:59
able to do it or something and then you don't talk to them anymore but sometimes you'll get an expert who says well let me introspect on that yes here is a special representation we use just for
02:03:12
aqueous chemistry equations or here's a special representation and a special technique which we can now apply to things in this special representation and so on and then you add that as the
02:03:24
thousand-and-first HL heuristic-level module and from then on in any application if it ever comes up again it'll be able to contribute and so on so that's
02:03:35
pretty much one of the main ways in which Cyc has recouped this lost efficiency a second important way is meta-reasoning so you can speed things up by
02:03:49
focusing on removing knowledge from the system until all it has left is the minimal knowledge needed but that's the wrong thing to do right that would be like extirpating part of a human's brain or something that's really bad
02:04:03
so instead what you want to do is give it meta-level advice tactical and strategic advice that enables it to reason about what kind of knowledge is going to be relevant to this problem what kind of
02:04:16
tactics are going to be good to take in trying to attack this problem when is it time to start trying to prove the negation of this thing because i'm knocking myself out trying to prove it's true and maybe it's false and if i just
02:04:28
spend a minute i can see that it's false or something so it's like dynamically pruning the graph based on the particular thing you're trying to infer yes and so by now
02:04:42
we have about 150 of these sort of breakthrough ideas that have led to dramatic speedups in the inference process where one of them was this EL
02:04:54
HL split and lots of HL modules another one was using meta- and meta-meta-level reasoning to reason about the reasoning that's going on and so on and you know 150 breakthroughs may sound
02:05:08
like a lot but if you divide by 37 years it's not as impressive so there are these kinds of heuristic modules that really help improve the
02:05:18
inference how hard in general is this because you mentioned higher-order logic in the general theorem-prover sense it's an intractable very difficult
02:05:33
problem yes so how hard is this inference problem if we let go of the perfect and focus on the good i would say it's half
02:05:46
of the problem in the following empirical sense which is over the years about half of our effort maybe 40% of our effort has been
02:05:58
our team of inference programmers and the other 50 or 60% has been our ontologists our ontological engineers putting in knowledge so our ontological engineers in most cases don't even know how to program they have degrees in
02:06:12
things like philosophy and so on i love that i'd love to hang out with those people oh yes it's wonderful but it's very much like the Eloi and the Morlocks in H.G. Wells' The Time Machine so you have the
02:06:25
Eloi who only program in the epistemological higher-order logic language yes and then you have the Morlocks who are under the ground figuring out what the machinery is that will make this
02:06:39
operate efficiently and so on and so occasionally they'll toss messages back to each other but it really is almost this 50/50 split between finding clever ways
02:06:53
to recoup efficiency when you have an expressive language and putting in the content of what the system needs to know and yeah both are fascinating to some degree the entirety
02:07:05
of the system as far as i understand is written in various variants of Lisp so my favorite programming language is still Lisp i don't program in it much anymore because you know the world has
02:07:19
in the majority of its systems moved on like everybody respects Lisp but many of the systems are not written in Lisp anymore but Cyc as far as i understand maybe you can correct me
02:07:32
there's a bunch of Lisp in it yeah so it's based on Lisp code that we produced most of the programming is still going on in a dialect of Lisp and then for efficiency reasons that gets
02:07:45
automatically translated into things like Java or C nowadays it's almost all translated into Java because Java has gotten good enough that that's really all we need to do
02:07:58
so it's translated into Java and then Java is compiled down to bytecode yes okay so that's a process that probably
02:08:10
has to do with the fact of when Cyc was originally written and when you build up a powerful system there is some technical debt you have to deal with as is the case with most powerful systems that span years
02:08:24
have you ever considered this would help me understand because from my perspective so much of the value of everything you've done with Cyc and Cycorp is
02:08:38
the knowledge have you ever considered just throwing away the code base and starting from scratch not really throwing away but sort of moving it to
02:08:50
like throwing away that technical debt starting with a more updated programming language is that throwing away a lot of value or no what's your sense how much of the value is in the silly software
02:09:04
engineering aspect and how much of the value is in the knowledge? so development of programs in lisp proceeds,
02:09:18
um, i think, somewhere between a thousand and fifty thousand times faster than development in any of what you're calling, um, modern or improved computer languages. well, there are other
02:09:30
functional languages, like, you know, clojure and all these others. but i mean, i'm with you, i like lisp, i just wonder how many great programmers there are still... yes. so it is true, when a new inference
02:09:43
programmer comes on board they need to learn some of this, but in fact we have a subset of lisp which we call, cleverly, subl, which is really all they need to learn,
02:09:55
and so the programming actually goes on in subl, not in full lisp. and so it does not take programmers very long at all to learn subl, and that's something which can then be
02:10:08
translated efficiently into java. and for some of our programmers who are doing, say, user interface work, they never have to even learn subl, they just have to learn apis into
02:10:21
the basic, uh, cyc engine. so you're not necessarily feeling the burden... it's extremely efficient, that's not a problem to solve. okay, right, right. the other thing is,
02:10:33
remember that we're talking about hiring programmers to do inference who are programmers interested in effectively automatic theorem proving right and so those are people already predisposed to
02:10:45
representing things in logic and so on. and lisp really was the programming language based on logic. john mccarthy and the others who developed it basically
02:10:58
took the formalisms that alonzo church and other philosophers, other logicians, had come up with, and basically said, can we make a programming language which is effectively logic?
02:11:13
and so since we're talking about reasoning about expressions written in this logical epistemological language, and we're doing operations which are
02:11:24
effectively like theorem-proving type operations and so on, there's a natural impedance match between lisp and the knowledge, the way it's represented. so i guess you could say
02:11:37
it's a perfectly logical language to use. oh yes. okay, i'm sorry, i'll even let you, uh, get away with that. so i'll probably use that in the future
02:11:50
without credit. without credit. but no, i think the point is that the language you program in isn't really that important; it's more that you
02:12:02
have to be able to think in terms of for instance creating new helpful hl modules and how they'll work with each other and looking at things that are taking a long
02:12:14
time and coming up with new specialized data structures that will make this efficient so let me just give you one very simple example which is when you have a transitive relation like larger than
02:12:27
this is larger than that which is larger than that which is larger than that so the first thing must be larger than the last thing whenever you have a transitive relation if you're not careful if i ask whether this thing over here is larger than the
02:12:40
thing over here, i'll have to do some kind of graph walk or theorem proving that might involve, like, 5 or 10 or 20 or 30 steps. but if you redundantly store
02:12:52
the transitive closure, the kleene star, of that transitive relation, now you have this big table, but you can always guarantee that in one single step you can just look up whether
02:13:04
this is larger than that. and so there are lots of cases where storage is cheap today, and so by having this extra redundant data structure we can answer
02:13:17
this commonly occurring type of question very, very efficiently.
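a minimal sketch of that idea, in python rather than cyc's subl (the relation, facts, and helper names here are invented for illustration, not cyc's actual code): precompute the transitive closure once, so a query that would otherwise take a multi-step graph walk becomes a single set-membership lookup.

```python
from collections import defaultdict

def transitive_closure(edges):
    """edges: (a, b) pairs meaning 'a is larger than b'.
    returns, for each term, the set of everything it transitively
    dominates -- the kleene-star-style closure lenat mentions."""
    direct = defaultdict(set)
    for a, b in edges:
        direct[a].add(b)
    closure = {}

    def reach(node):
        if node not in closure:            # memoize; relation is acyclic
            result = set()
            for child in direct[node]:
                result.add(child)
                result |= reach(child)
            closure[node] = result
        return closure[node]

    for node in list(direct):
        reach(node)
    return closure

facts = [("elephant", "dog"), ("dog", "mouse"), ("mouse", "flea")]
bigger_than = transitive_closure(facts)

# one table lookup instead of a three-step chain of inferences:
print("flea" in bigger_than["elephant"])   # True
```

the trade is exactly the one he names: extra storage for the closure table in exchange for constant-time answers to a very common query shape.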
02:13:30
and let me give you one other analog of that, which is something we call rule macro predicates. we'll see this complicated rule, and we'll notice that things very much like it syntactically come up again and again and again. so we'll create a whole brand new relation
02:13:43
or predicate or function that captures that and takes maybe not two arguments takes maybe three four or five arguments and so on and now we have
02:13:56
effectively converted some complicated if-then rule, that might have to have inference done on it, into some ground atomic formula which is just
02:14:09
the name of a relation and a few arguments and so on and so converting commonly occurring types or schemas of rules into brand new predicates brand new functions
02:14:22
turns out to enormously speed up the inference process.
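a sketch of the rule-macro-predicate idea in the same illustrative python (relationAllExists is a cycl predicate of roughly this flavor, but the toy kb and handler below are invented for the demo): a quantified if-then rule becomes one ground assertion with extra arguments, plus a small specialized handler, so the common case needs no general theorem proving.

```python
# instead of storing a rule like
#   (implies (isa ?x Dog)
#            (thereExists ?h (and (isa ?h Head) (anatomicalPart ?x ?h))))
# store one ground atomic formula with more arguments:
#   (relationAllExists anatomicalPart Dog Head)
# and answer the common question with a lookup, not theorem proving.

relation_all_exists = {
    ("anatomicalPart", "Dog", "Head"),
    ("anatomicalPart", "Barn", "Roof"),
}

isa = {"fido": "Dog", "old_barn": "Barn"}

def has_part_of_type(individual, part_type):
    """specialized handler for the macro predicate: does this
    individual have a part of the given type?"""
    return ("anatomicalPart", isa.get(individual), part_type) in relation_all_exists

print(has_part_of_type("fido", "Head"))      # True
print(has_part_of_type("old_barn", "Head"))  # False
```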
02:14:34
so now we've covered about four of the 150, um, good ideas, i said. nice, that's cool. so that idea in particular is like a nice compression that turns out to be really useful. yes. that's really interesting. i mean, this whole thing is just fascinating, from a philosophical... there's part of me, i mean, it makes me a little bit sad, because
02:14:45
your work is both, um, from a computer science perspective fascinating, with the inference engine, and from an epistemological, philosophical aspect fascinating. but, you know, it is also... you're running a
02:14:58
company, and there's some stuff that has to remain private, and it's sad. well, here's something that may make you feel a little bit better. um, we've formed a
02:15:10
not-for-profit company, uh, called the knowledge activization institute, knax, and i have this firm belief, with a lot of empirical evidence to support it, that
02:15:24
the the education that people get in high schools in colleges in graduate schools and so on is almost completely orthogonal to almost
02:15:34
completely irrelevant to how good they're going to be at coming up to speed in doing this kind of ontological engineering and writing these assertions and rules and so on in
02:15:48
in cyc. and so very often we'll interview candidates who have their phd in philosophy, who've taught logic for years and so on, and they're just awful. but the converse is true, so
02:16:00
one of the best ontological engineers we ever had never graduated high school. and so the purpose of, um, the knowledge activization institute, if we can get some foundations to help support it,
02:16:14
is to identify people in the general population, maybe high school dropouts, who have latent talent for this sort of thing, offer them
02:16:26
effectively scholarships to train them and then help place them in companies that need more trained ontological engineers some of which would be working for us but mostly would be working for
02:16:38
partners or customers or something and if we could do that that would create an enormous number of relatively very high paying jobs for people who currently have no no way out
02:16:50
of some you know situation that they're locked into so is there something you can put into words that describes somebody who would be great at ontological engineering so what
02:17:04
characteristics about a person make them great at this task, this task of converting the messiness of human language and knowledge, yeah, into formal logic? this is
02:17:17
very much like what um alan turing had to do during world war ii uh in trying to find people to bring to bletchley park where he would publish in the london times cryptic crossword puzzles
02:17:30
along with some some innocuous looking note which essentially said if you were able to solve this puzzle in less than 15 minutes please call this phone number and so on so um you know or uh back when
02:17:43
i was young, there was, uh, the practice of having matchbooks where on the inside of the matchbook, um, there would be a "can you draw this? you could have a career in commercial
02:17:56
art if you can copy this drawing," you know, and so on. so, um, yes. the analog of that: is there a little test to get to the core of whether you're gonna be good or not? so part of it has to do with, uh, being able to
02:18:09
make and appreciate, um, and react appropriately to, puns and other jokes. so you have to have a kind of sense of humor, and if you're good at telling jokes and
02:18:22
good at understanding jokes, that's one indicator. yes, like dad jokes. well, maybe not dad jokes, but real, funny jokes. um, but, uh, i think i'm applying to work at cycorp. yeah. but,
02:18:35
um, another is if you're able to introspect. so very often we will give someone a simple question and we'll say, like, um, why is this? and, you know,
02:18:48
sometimes they'll just say, "because it is." okay, that's a bad sign. but very often they'll be able to introspect, and so on. so one of the questions, um, i often ask is, i'll point to a sentence with a pronoun in it, and
02:19:01
i'll say you know the referent of that pronoun is obviously this noun over here you know how would you or i or an a.i or a five-year-old ten-year-old child
02:19:12
know that that pronoun refers to that noun over here and often the people who are going to be good at ontological engineering will give me
02:19:25
some causal explanation or will refer to some things that are true in the world so if you imagine a sentence like the horse was led into the barn while its head was still wet and so its head refers to the horse's
02:19:37
head but how do you know that and so some people will say i just know it some people will say well the horse was the subject of the sentence and i'll say okay well what about the horse was led into the barn while its roof was still wet now its roof
02:19:51
obviously refers to the barn and so then they'll say oh well that's because it's the closest noun and so on so basically if they try to give me answers which are based on syntax and
02:20:04
grammar and so on that's a really bad sign but if they're able to say things like well horses have heads and barns don't and barns have roofs and horses don't then that's a positive sign that they're going to be good at this because they
02:20:17
can introspect on what's true in the world that leads you to know certain things.
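a toy sketch of that horse/barn reasoning in python (the part-of facts and the function are invented for the demo, not cyc's representation): the referent of "its" is picked by consulting world knowledge about which kinds of things have which parts, rather than by syntax.

```python
# world knowledge: which kinds of things have which parts
has_part = {
    "horse": {"head", "tail", "leg"},
    "barn":  {"roof", "door", "wall"},
}

def resolve_its(candidate_nouns, possessed_part):
    """return the unique candidate that can plausibly possess the part,
    the way a person reasons 'horses have heads and barns don't';
    None means the sentence is genuinely ambiguous to this kb."""
    plausible = [n for n in candidate_nouns
                 if possessed_part in has_part.get(n, set())]
    return plausible[0] if len(plausible) == 1 else None

# "the horse was led into the barn while its head was still wet"
print(resolve_its(["horse", "barn"], "head"))   # horse
# "the horse was led into the barn while its roof was still wet"
print(resolve_its(["horse", "barn"], "roof"))   # barn
```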
02:20:29
how fascinating is it that getting a phd makes you less capable of introspecting deeply about this kind of... oh, i wouldn't go that far. i'm not saying that it makes you less capable, let's just say it's independent of, oh, i don't know, of how good people are. you're not saying that, i'm saying it's interesting
02:20:41
that for a lot of people, uh, phds... uh, sorry, philosophy aside, that sometimes education narrows your thinking versus expands it. yes. it's kind of fascinating. and certainly, when you're trying to do
02:20:55
ontological engineering, which is essentially teaching our future ai overlords how to reason deeply about this world and how to understand it, that requires that you think deeply
02:21:07
about the world. so i'll tell you a sad story about mathcraft, which is: why is it not widely used in schools today? we're not really trying to make a big profit on it or anything like that.
02:21:20
when we've gone to schools, their attitude has been, well, if a student spends 20 hours going through this mathcraft program from start to end and so on,
02:21:32
will it improve their score on this standardized test more than if they spent 20 hours just doing mindless drills of problem after problem after problem and the answer is well no but it'll
02:21:46
increase their understanding more and their attitude is well if it doesn't increase their score on this test um then that's not you know we're not going to adopt it that's sad i mean that's that's a whole
02:21:58
that's a whole nother three-, four-hour conversation about the education system. but let me ask you, let me go super philosophical, as if we weren't already. so in 1950, alan turing wrote the paper that formulated the turing test. yes. and
02:22:12
he opened the paper with the question can machines think so what do you think can machines think let me ask you this question absolutely machines can think um certainly as well
02:22:24
as humans can think. um, right, we're meat machines. um, just because they're not currently made out of meat is just, you know, an engineering decision,
02:22:37
and so on. so of course machines can think. i think that there was a lot of damage done by people misunderstanding turing's imitation
02:22:51
game, and focusing on trying to get a chatbot to fool other people into thinking it was human,
02:23:04
and so on that that's that's not a terrible test in and of itself but it shouldn't be your one and only test for intelligence so you uh in terms of tests of intelligence uh you know
02:23:16
with the loebner prize, which is, you want to say, a more strict formulation of the turing test as originally formulated, and then there's something like the alexa prize, which is more,
02:23:29
i would say, a more interesting formulation of the test, which is, like, uh, ultimately the metric is how long does a human want to talk to the ai system. so it's like the goal is you want it to be
02:23:41
20 minutes it's basically not just have a convincing conversation but more like a compelling one or a fun one or an interesting one
02:23:53
and that seems like more to the spirit, maybe, of, um, what, uh, turing was imagining. but what, for you, do you think, in the space of tests, is a good test? like, when
02:24:06
you see a system based on cyc that passes that test, you'd be like, damn, we've created something special here. the test has to be something involving
02:24:20
depth of reasoning and recursiveness of reasoning, the ability to answer repeated why questions about the answer you just gave. it's how many why questions in a row can you keep answering,
02:24:32
something like that. and, um, have, like, a young curious child and an ai system, and how long will the ai system last before it wants to quit? yes. and again, that's not the only test.
02:24:45
another one has to do with argumentation in other words here's a proposition um come up with pro and con arguments for it and try and
02:24:57
give me convincing arguments on both sides and so that's that's another important kind of ability that the system needs to be able to exhibit in order to really be
02:25:11
intelligent i think so there's certain i mean if you look at ibm watson and like certain impressive accomplishments for very specific tests almost like a demo right um there
02:25:24
is some uh like i talked to the guy who led the the jeopardy effort and there's some kind of hard-coding
02:25:35
heuristics uh tricks that you try to pull it all together to make the thing work in the end for this thing right that seems to be one of the lessons with ai is like that's the fastest way to get a solution
02:25:48
that's pretty damn impressive so so here here's what i would say is that as impressive as that was it made some mistakes but more importantly many of the mistakes it made were
02:26:03
mistakes which no human would have made. yeah. and so part of the new or augmented turing tests would have to
02:26:15
be: the mistakes you make are ones which humans don't look at and say, what? yeah. so, for example, there was,
02:26:27
um a question about which 16th century italian politician blah blah blah and watson said ronald reagan so most americans would have gotten that question wrong but they would never have
02:26:41
said ronald reagan as an answer because you know among the things they know is that he lived relatively recently and people don't really live 400 years and you know
02:26:53
things like that so that that's i think a very important thing which is um if it's making mistakes which no normal sane human would have made then that's a really bad sign and if
02:27:05
it's not making those kinds of mistakes then that's a good sign and i don't think it's any one very very simple test i think it's all of the things you mentioned all the things i mentioned there's really a battery of tests which
02:27:18
together if it passes almost all of these tests it'd be hard to argue that it's not intelligent and if it fails some several of these tests it's really hard to argue that it really understands
02:27:30
what it's doing, that it really is generally intelligent. so to pass all of those tests... you know, we've talked a lot about cyc and knowledge and reasoning. do you think this ai system would need
02:27:42
to have some other human-like elements for example a body or a physical manifestation in this world and another one which seems to be fundamental to
02:27:56
the human experience, is consciousness, the subjective experience of what it's like to actually be you. do you think it needs those to be able to pass all those tests and to achieve general intelligence? it's a good
02:28:09
question i think in the case of a body uh no i know there are a lot of people like penrose who would have uh disagreed with me and so on and and others but no i don't think it needs to have a body
02:28:21
in order to be intelligent i think that it needs to be able to talk about uh having a body and having sensations and having emotions and so on it doesn't
02:28:34
actually have to have all of that but it has to understand it in the same way that helen keller was perfectly intelligent and able to talk about colors and sounds and
02:28:46
shapes and and so on even though she didn't directly experience all the same things that the rest of us do so knowledge of it and being able to
02:28:58
correctly make use of that is certainly an important facility. but actually having a body... if you believe that, that's just a kind of religious or mystical belief. you
02:29:11
can't really argue for or against it i suppose um it's it's just something that some people that some people believe what about the like an extension of the body which is consciousness i mean like it feels like
02:29:25
something to be here sure but you know what what does that really mean it's like well if i talk to you you say things which make me believe that you're conscious i know that i'm conscious but that's you
02:29:37
know, you're just taking my word for it now. but in the same sense cyc is conscious, in that same sense, already, where of course it understands it's a computer program, it understands where and when it's
02:29:49
running it understands who's talking to it it understands what its task is what its goals are what its current problem is that it's working on it understands how long it's spent on things what it's tried it understands what it's done in
02:30:01
the past, and so on. um, and, uh, you know, if we want to call that consciousness, then yes, cyc is already conscious. but i don't think that i would
02:30:12
ascribe anything mystical to that again some people would but i would i would say that you know other than other than our own personal experience of consciousness um we're just treating everyone else in the world
02:30:25
um so to speak um at their word about being conscious and so if if a computer program if an ai is able to exhibit all the same kinds of
02:30:38
responses as you would expect of a conscious entity, then, you know, doesn't it deserve the label of consciousness just as much? so there's another burden that comes with this whole intelligence thing
02:30:50
that humans got is uh the extinguishing of the light of consciousness which is uh kind of realizing they were going to be dead someday and there's a bunch of philosophers like
02:31:04
ernest becker who kind of think that this realization of mortality and then fear sometimes they call it terror of
02:31:15
of mortality is one of the creative forces behind the human condition, like it's the thing that drives us. do you think it's important for an ai system? you know,
02:31:29
when cyc proposed that it's... it's not human, and it's one of the moderators of its contents, um... you know, there's another question you
02:31:41
could ask, which is, like, it kind of knows that humans are mortal and it's immortal. and i think one really important, uh, thing that's possible when you're conscious
02:31:53
is to fear the extinguishing of that consciousness the fear of mortality do you think that's useful for intelligence thinking like i might die and i really don't want to die i i don't think so i
02:32:06
think it may help some humans to be um better people it may help some humans to be more creative and so on i don't think it's necessary
02:32:18
for ais to believe that they have limited lifespans and therefore they should make the most of their behavior maybe eventually the answer to that and my answer to that will change but as of now i would say
02:32:31
that that's almost like a frill or a side effect, that it's not, in fact... if you look at most humans, most humans, um, ignore the fact that they're going to die, most of the time. uh, so
02:32:44
well, but that's, like... this goes to the white space between the words. so what ernest becker argues is that that ignoring means we're living an illusion that we've constructed on the foundation of this terror. so we
02:32:58
escape. life as we know it, pursuing things, creating things, love, everything we can think of that's beautiful about humanity, is just trying to escape this realization that
02:33:11
we're going to die one day. that's his idea. and i think... i don't know if i 100 percent believe in this, but it certainly rhymes... it seems to me like it
02:33:24
rhymes with the truth. yeah, i think that for some people, um, that's going to be a more powerful factor than for others. clearly he's talking about russians, and
02:33:36
i think that, uh, some russians... clearly it infiltrates all of russian literature. an ai doesn't have to have, uh, fear of death as a motivating force, in
02:33:52
that we can build in motivation so we can build in the motivation of obeying users and making users happy and making others happy and
02:34:04
and so on and that can substitute for this sort of personal fear of death that sometimes leads to bursts of creativity in in humans i
02:34:17
don't know i think like i think ai really needs to understand death deeply in order to be able to drive a car for example i i think there's just some like there no i really disagree i think
02:34:30
it needs to understand the value of human life especially the value of human life to other humans the um and understand that certain things are more important than other
02:34:41
things. it has to have a lot of knowledge about ethics and morality and so on. but some of it is so messy that it's impossible to encode. for example, and this is where we disagree: if there's a
02:34:54
person dying right in front of us most human beings would help that person but they would not apply that same ethics to everybody else in the world this is the tragedy of how difficult it
02:35:06
is to be a doctor because they know when they help a dying child they know that the money they're spending on this child cannot possibly be spent on every other child that's dying and that's
02:35:19
that's a very difficult decision to encode. now, perhaps it could be formalized. oh, but i mean, you're talking about autonomous vehicles, right? so
02:35:32
autonomous vehicles are going to have to make those decisions um all the time of um what is the chance of this bad event happening um how bad is that compared to this chance of that
02:35:45
bad event happening, and so on. and, you know, when a potential accident is about to happen, is it worth taking this risk? if i have to make a choice, which of these two cars am i going to hit, and why?
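a minimal sketch of that kind of decision, with made-up numbers (this is generic expected-cost reasoning, not any vehicle's actual policy): each option carries possible outcomes with probabilities and costs, and the system picks the option with the lowest expected cost.

```python
# options an autonomous vehicle might weigh in a split second;
# probabilities and costs are invented purely for illustration
options = {
    "brake_hard":  [(0.95, 0.0),     # (probability, cost): no collision
                    (0.05, 50.0)],   # rear-ended at low speed
    "swerve_left": [(0.80, 0.0),
                    (0.20, 400.0)],  # hit the car in the next lane
}

def expected_cost(outcomes):
    return sum(p * cost for p, cost in outcomes)

for name, outcomes in options.items():
    print(f"{name}: expected cost {expected_cost(outcomes):.1f}")

best = min(options, key=lambda name: expected_cost(options[name]))
print("choose:", best)   # brake_hard (2.5 vs 80.0)
```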
02:35:58
and see, i was thinking about a very different choice when i'm talking about mortality, which is just observing, uh, manhattan-style driving. i think that a human, as an effective driver, needs to threaten
02:36:11
pedestrians' lives a lot. there's a dance. i've watched pedestrians a lot, i worked on this problem, and it seems like, if i could summarize the problem
02:36:23
of a pedestrian crossing is the car with this movement is saying i'm going to kill you and the pedestrian is saying maybe and then they decide and say no i don't think you you have the guts to
02:36:36
kill me. and they walk, they walk in front and they look away. and there's that dance... this is a social contract: the pedestrian trusts that once they're in front of the car, and the car
02:36:48
is, from a physics perspective, sufficiently able to stop, it's going to stop. but the car also has to threaten the pedestrian, like, i'm late for work, so you're being kind of an asshole by
02:36:59
crossing in front of me. but life and death is, like, part of the calculation here, and that equation is being solved millions of times a day,
02:37:11
yes very effectively that game theory whatever yeah whatever that formulation absolutely i just i don't know if it's as simple as some formalizable game theory problem it it could very well be in a case of driving
02:37:24
and in the case of most of uh human society i i don't know but it uh yeah you might be right that sort of uh the fear of death is just one of the quirks of uh like the way our brains have
02:37:37
evolved but it's not it's not a necessary feature of intelligence drivers certainly are always doing this kind of estimate even if it's unconscious subconscious of
02:37:49
what are the chances of various bad outcomes happening like for instance if i don't wait for this pedestrian or something like that and what is the downside to me going to be
02:38:01
in terms of um you know time wasted talking to the police or um you know getting sent to jail or you know things like that and so um and there's also emotion like people in their cars tend to get
02:38:15
uh irrationally angry that's that's that's dangerous but you know think think about this is all part of why i think that autonomous vehicles um truly autonomous vehicles are farther out
02:38:26
than most people think, because there is this enormous level of complexity which goes beyond mechanically controlling the car. and
02:38:39
i i can see the autonomous vehicles as a kind of metaphorical and literal accident waiting to happen um and not just because of their um overall um
02:38:52
record of incurring versus preventing accidents and so on, but just because of the almost voracious appetite people have for
02:39:07
bad stories about powerful companies and powerful entities. when i was, um, coincidentally, at a japanese fifth-generation computing systems conference in 1987,
02:39:21
while i happened to be there there was a worker at an auto plant who was despondent and committed suicide by climbing under the safety chains and so on getting stamped to death by a machine and instead of being a
02:39:33
small story that said despondent worker commits suicide it was front page news that effectively said robot kills worker because the public is just waiting
02:39:46
for stories about, like, "ai kills photogenic family of five" type stories. and even if you could show that nationwide, uh, this system saved more
02:40:00
lives than it cost, and prevented more injuries than it caused, and so on, the media, the public, the government are just coiled and ready to pounce
02:40:13
on stories where in fact it failed even if there are relatively few yeah it's so fascinating to watch us humans resisting the cutting edge of science and
02:40:26
technology and almost like hoping for it to fail and constantly and you know this just happens over and over and over throughout history or even if we're not hoping for it to fail we're we're fascinated by it and in terms of what we
02:40:38
find interesting, um, the one-in-a-thousand failure is much more interesting than the 999 boring successes. so once we build an agi system,
02:40:50
say cyc is some part of it, and say it's very possible that you would be one of the first people that can sit down in the room, let's say, with her, and have a
02:41:04
conversation what would you ask her what would you talk about looking at all of the content out there on the web and so on
02:41:21
what are the what are the some possible solutions to big problems that the world has that people haven't really thought of before that are not being
02:41:35
properly or at least adequately pursued what are some novel solutions that you can think of that we haven't that might work and that might be worth considering
02:41:48
so that is a damn good question given that the agi is going to be somewhat different from human intelligence it's still going to make some mistakes that we wouldn't make but it's also possibly going to notice
02:42:01
some blind spots we have and um i would i would love it as a test of is it really on a par with our intelligences can it help spot some of the blind spots
02:42:15
that we have so the two-part question of can you help identify what are the big problems in the world and two what are some novel solutions to those problems that are not being
02:42:28
talked about by anyone yeah and some of those may become um you know infeasible or reprehensible or something but some of them might be actually great things to look at you know if you if you go back and look at
02:42:41
some of the most powerful discoveries that have been made uh like relativity and um superconductivity and so on a lot of them were
02:42:52
cases where someone took seriously the idea that there might actually be a non-obvious answer to it to a question so in einstein's
02:43:05
case it was, yeah, the lorentz transformation is known, um, nobody believes that it's actually the way reality works. what if it were the real way that reality actually worked? so, you know, a lot of people don't realize he didn't actually work out that
02:43:17
equation he just sort of took it seriously um or in the case of superconductivity you have this v equals ir equation where r is resistance and so on and um
02:43:29
it was being mapped at lower and lower temperatures but everyone thought that was just bump on a log research to show that v equals ir always held and then when some
02:43:41
graduate student got to a slightly lower temperature and showed that resistance suddenly dropped off everyone just assumed that they did it wrong and they and it was only a little while later that they realized it
02:43:53
was actually a new phenomenon. or in the case of, um, the h. pylori bacteria causing stomach ulcers, where everyone thought that stress and
02:44:05
stomach acid caused ulcers and when a doctor in australia claimed it was actually a bacterial infection he couldn't get anyone seriously to
02:44:17
listen to him and he had to ultimately inject himself with the bacteria to show that he suddenly developed a life-threatening ulcer in order to get other doctors to seriously consider that so there are all
02:44:30
sorts of things where humans are locked into paradigms what thomas kuhn called paradigms and we can't get out of them very easily so a lot of ai is locked into the
02:44:43
deep learning, machine learning paradigm right now, and, um, almost all of us and almost all sciences are locked into current paradigms, and, you know, kuhn's point was
02:44:55
pretty much you have to wait for people to die um in order for the new generation to escape those paradigms and i think that one of the things that would change that sad reality is if we had
02:45:08
trusted agis that could help take a step back and question some of the paradigms that we're currently locked into yeah it would accelerate the paradigm shifts and yes in human science and
02:45:22
progress you've lived a very interesting life where you thought about big ideas and you stuck with them can you give advice to young people
02:45:34
today somebody in high school somebody undergrad about um career about life i'd say you can make a difference
02:45:47
but in order to make a difference you're going to have to have the courage to follow through with ideas which other people might not immediately understand or
02:46:01
support you have to realize that if you make some some plan that's going to take an extended period of time to carry out
02:46:14
don't be afraid of that that's true of um physical training of your body that's true of um learning some profession that's also true of
02:46:28
innovation that some innovations are not great ideas you can write down on a napkin and become an instant success if you turn out to be right some of them are
02:46:40
paths you have to follow. but remember that you're mortal. remember that you have a limited number of decade-sized bets to make with your life,
02:46:53
and you should make each one of them count and that's true in personal relationships that's true in career choice that's true in making discoveries and so on and if you
02:47:04
follow the path of least resistance you'll find that you're optimizing for short periods of time and before you know it you turn around and long periods of time have gone by
02:47:16
without you ever really making a difference in the world you know there's when you look i mean the field that i really love is artificial intelligence and there's not many projects there's not many
02:47:28
little flames of hope that have been carried for many years, for decades, and cyc represents one of them. and, uh, i mean, that in itself is just a really
02:47:41
inspiring thing. so i'm deeply grateful that you would be carrying that flame for so many years, and i think that's an inspiration to young people. that said, you said life is finite, and we talked about whether mortality
02:47:53
is a feature of agi do you think about your own mortality are you afraid of death um sure i'd be crazy if i weren't and um as
02:48:04
i get older i'm now um over 70. so as i get older it's more on my mind especially as acquaintances and friends and especially mentors
02:48:17
one by one, are dying. so i can't avoid thinking about mortality. and i think that the good news, from the point of view of you and the rest of the world, is that that adds impetus
02:48:30
to my need to succeed in a small number of years in the future because i have a deadline exactly i'm not going to have another 37 years to continue working on this so we
02:48:41
really do want cyc to make an impact in the world, commercially, physically, metaphysically, in the next small number of years, two, three, five years, not two, three, five
02:48:54
decades anymore. and so this is really driving me toward, uh, this sort of commercialization and increasingly widespread application
02:49:07
of cyc. whereas before, i felt that i could just sort of sit back, roll my eyes, and wait till the world caught up, now i don't feel that way anymore. i feel like i need to put in some effort to
02:49:20
make the world aware of what we have and what it can do and the good news from your point of view is that that's that's why i'm sitting here and you're gonna be more productive [Laughter] uh i love it and if i can help in any way i
02:49:33
would love to from uh from you know from a programmer perspective i love especially these days just contributing in small and big ways so if there's any open sourcing from the mit side and the
02:49:46
research, i would love to help. but, you know, bigger than cyc, like i said, it's that little flame that you're carrying of artificial intelligence. the big dream is there.
02:49:58
what do you hope your legacy is hmm that's a good question that people think of me as one of the pioneers or inventors
02:50:10
of the ai that is ubiquitous and that they take for granted and so on much much the way that today we look back on the the pioneers
02:50:24
of electricity or the pioneers of similar types of technologies and so on as you know it's hard to imagine what life would be like if these people hadn't
02:50:37
done what they did. so that's one thing that i'd like to be remembered as. another is as the creator, one of the originators, of this gigantic
02:50:50
knowledge store and acquisition system that is likely to be at the center of whatever this future ai thing will look like exactly and i'd also like to
02:51:02
be remembered as someone who wasn't afraid to spend several decades on a project in a time when
02:51:15
when almost all of the other forces, institutional forces and commercial forces, are incenting people to go for
02:51:27
short-term rewards and a lot of people gave up a lot of people that dreamt the same dream as you gave up yes and you didn't yes i mean uh doug it's truly an honor this
02:51:42
was a long time coming i i a lot of people bring up your work uh specifically and more broadly philosophically of this is the dream of artificial intelligence this is likely a
02:51:56
part of the future. we're sort of focused on machine learning applications, all that kind of stuff, today, but it seems like the ideas that cyc carries forward are something that would be at the center
02:52:08
of the problem they're all trying to solve, which is the problem of intelligence, emotional and, uh, otherwise. so thank you so much. it's such a huge honor that you would
02:52:20
talk to me and spend your valuable time with me today. thanks for talking. thanks, lex, it's been great. thanks for listening to this conversation with doug lenat. to support this podcast, please check out our sponsors in the description,
02:52:33
and now let me leave you with some words from mark twain about the nature of truth: if you tell the truth, you don't have to remember anything. thank you for listening, and i hope to see
02:52:45
you next time