The Perils and Possibilities of AI with Gilman Louie

On this episode of The Adrenaline Zone, Sandy and Sandra delve into the world of artificial intelligence (AI) with guest Gilman Louie. As a former video game designer turned venture capitalist, Gilman brings valuable insights into the current state of AI, including the emergence of Generative AI and the ChatGPT algorithm, which can generate text that seems indistinguishable from what a human could produce. Together, our trio touches on the potential risks and benefits associated with AI's continued development, from its ability to revolutionize industries to the possibility of machines falling out of control or being used against us.

Gilman provides expert analysis on the ethical implications of creating machines that can mimic human behavior, emphasizing the importance of investing in the soft sciences, such as sociology and psychology, to better understand the social impacts of these technologies. He also highlights the need for a more thoughtful and coordinated approach to AI implementation, involving academia, governments, and tech companies working together to build safer and more trustworthy systems. With Gilman's experience as a venture capitalist and his involvement in In-Q-Tel, a venture capital firm associated with the Central Intelligence Agency, the episode offers a grounded view of the potential benefits and challenges of AI. Ultimately, his conversation with our hosts serves as a reminder of the importance of thoughtful implementation and regulation of emerging technologies to ensure a better future for all.

Resources:

If you enjoyed this episode of The Adrenaline Zone, hit the subscribe button so you never miss another thrilling conversation, and be sure to leave a review to help get the word out to fellow adrenaline junkies.

Transcript:

Sandy Winnefeld: Artificial intelligence recently crossed a threshold of sorts with the advent of something called Generative AI led by an algorithm named ChatGPT, which can rapidly generate text that seems indistinguishable from what a human could put together. 

Dr. Sandra Magnus: While the technology has limitations, it resonates with audiences that were exposed to fictional accounts of how such technology could evolve in harmful ways, such as the Terminator film series.

Sandy Winnefeld: Generative AI's emergence has brought into sharper focus debates over what computers can do and should be allowed to do, and how this capability might be used against us or even fall out of control completely.

Dr. Sandra Magnus: Our guest today, Gilman Louie, has closely watched this technology evolve. He began as a video game designer, then ran the CIA's venture capital arm, and is now a partner in a venture capital firm, Alsop Louie Partners. He's also the CEO of America's Frontier Fund, chairman of the Federation of American Scientists, and former commissioner of the National Security Commission on Artificial Intelligence. 

We talked with Gilman about what the most current forms of AI can do and the risks associated with their continued evolution. 


So, Gilman, welcome to The Adrenaline Zone.

Gilman Louie: Sandra, it's great to be here.

Sandy Winnefeld: Well, you know, we always like to start by asking our guests just a little bit about where they came from. And you had a very circuitous, pretty interesting path to where you are today. While your degree was in business administration, of all things, your first big success was with video games. So how did you get into producing video games, and how has that industry evolved while you've been in it?

Gilman Louie: Well, I started back in the days when video games were on machines like the Atari 2600, the Apple II, and the TRS-80 (Radio Shack actually used to make computers), and that's the era I grew up in. This would be very relevant to you, Sandy. The reason I got interested in computers was that my brother was in the Navy on an aircraft carrier called the USS Ranger. And he was–

Sandy Winnefeld: Oh, so was I.

Gilman Louie: Yes. And so it was a Tiger cruise, and I went onto the carrier, and we looked at this big UNIVAC machine that basically processed payroll. And I said to my brother, “I think I could program this machine to play games.” And that's how I got my start. So I owe my interest in games to the US Navy. But that was 1976, and we have to remember the state of computers back in those days. The idea of having a computer in your home was totally foreign; it would have been science fiction, almost Star Trek-like. And as that industry developed with the personal computer, it grew and grew and grew to a point where today it's larger than movies, it's larger than music. A whole generation of players now have lived in a world where video games are more common than television. My daughter doesn't even know what a television is. She just thinks there's a big screen that you attach her Nintendo Switch to. And so I grew up in that era. I was best known for simulations of military systems, because there was a whole group of video gamers who wanted to know what it was like to fly an F-16. 

I actually had the Paramount Pictures rights to this movie called Top Gun and produced the very first set of video games for Top Gun; Paramount Pictures was actually an investor in my company. I also produced another game that I brought in from Russia (there's a whole movie about it right now, called Tetris), and I introduced the Pokemon collectible card game when I was on the board of Wizards of the Coast. So it's been a fascinating experience going from big blocky graphics and text-based video games to where we are today, where it's almost indistinguishable between reality and what's being simulated in these very advanced games. That's not very different from what the military uses today for training and simulation.

Dr. Sandra Magnus: It's funny, just as an aside, I remember Pong, that ping pong game, from your era. But to your point about video games becoming so realistic, is that how your video game success led you to get involved in In-Q-Tel? For our listeners, In-Q-Tel essentially seeds the entrepreneurial community with funds to tackle the needs of our nation's intelligence community. So is that how you made the path from video games into seed funding, because things were emerging from simulator land into reality?

Gilman Louie: Long story short, what happened was I got a phone call after I sold my company to Hasbro, where I was chief creative officer. Fortune magazine was doing this kind of adrenaline edition. The year before, they had the best executive golfers, and the year before that, the best motorcyclists. So they thought it would be great to do one on fighter pilots, because there are so many former fighter pilots among the senior executives of these big companies. And so they called Hasbro, and Hasbro said, “Well, we don't have any fighter pilots, but we do have a video gamer who writes fighter pilot games.” And they said, “Oh, this is going to be great.” Gamer versus fighter pilots. 

They actually put somebody who had been through fighter weapons school in the back seat of these T-34 Mentors, and they put me in the front seat. I wasn't allowed to take off because I don't have a pilot's license, but I had enough stick time and simulation time to be able to fly the aircraft once the plane was in the air. And so we did a series of dogfights. I ended up dogfighting a senior executive of a big recruitment firm, I won't say which one. But it turns out that individual was an A-1 pilot and a current brigadier general in the reserves who flew the F-15. The good news for the Americans is he ate my lunch. He did a left-to-left pass, and 34 seconds later–

Sandy Winnefeld: You're dead.

Gilman Louie: I was dead. Then they had the fighter weapons guy try to beat him, and he could never get inside the other guy's circle, so you got to see the other pilot smiling. Then, I won't say which company, but I had to fly against the CEO of a major aircraft manufacturer who was a former Canadian F-100 pilot. And I had a couple of tricks that the fighter pilots of the Virginia Guard had taught me months earlier. They only work one time, but I used one and I beat him in the fight. And that person got nothing but grief afterwards from the Americans. The next day I got a phone call from the American pilot saying, “Hey, I work for a major recruitment firm when I'm not flying jets and dogfighting. Would you come in and interview for this really special job?” And it turns out that special job was creating a venture capital fund for the Central Intelligence Agency. So that's a very long story for a very short question.

Dr. Sandra Magnus: That's a cool story, actually. That's a really cool story.

Sandy Winnefeld: Brought back memories. I'm not going to go against you in a T-34, that's for sure.

Dr. Sandra Magnus: We didn't do dogfighting in the T-38.

Sandy Winnefeld: So one of In-Q-Tel's theses, and you're one of the founders of In-Q-Tel, which is a really interesting organization, is that venture money can flow easily into an application that not only helps the intelligence community but is also scalable and can make a profit in the larger world. So can you give us an example from your early days as an In-Q-Tel leader of how that worked out?

Gilman Louie: I do have a great example. We had invested in technology that allowed us to take a map, build 3D terrain from that particular map, and render the world in this kind of 3D environment. It was a little company called Keyhole, built by a bunch of ex-video gamers. And I took it out to a bunch of VCs, and they kind of said, “This is really pretty, truly awesome, but there's no money to be made in geospatial maps. There's no money doing maps.” But what was taking place at that point, that was literally a couple of months before the run-up to the Second Gulf War. So I showed it to the folks inside DOD and the Agency, and their immediate reaction was: we really, really need to have this. We used to have these little icons that showed hospitals and schools, coffee shops, and ATM machines. We used it for a particular military briefing application, and we didn't have time to change the icons, so the word was, “Okay, don't bomb the coffee shops, but do bomb the ATM machines,” because we actually had all of Baghdad in 3D at that particular time. That company was acquired by another company as it was going public, a little company called Google, just as they were going public. And Keyhole became Google Earth and Google Maps. 

Dr. Sandra Magnus: Wow.

Gilman Louie: And the same technology that we were using with our classified data, CNN was using, the exact same technology from the same company, to show what was going on in Baghdad. And that's a perfect example of what we call dual-use. Many, many years later, that technology ended up in a video game company that got spun out of Google and is now private, with Google and Nintendo as investors, and they make this little game called Pokemon GO, all off the same tech.

Dr. Sandra Magnus: Wow.

Sandy Winnefeld: Amazing. So that is a perfect example of dual-use. 

Gilman Louie: Yeah.

Dr. Sandra Magnus: I'm sure there are millions of stories like that, but these days I know you're continuing to spend a lot of time advising and helping startups. One of the reasons we wanted to talk to you is that you also provide advice and thought leadership to the government on the role of artificial intelligence, which is, as you know, a very controversial subject these days. So what got you interested in artificial intelligence?

Gilman Louie: Well, I always say science fiction is a good predictor of what's going to happen in the future, to the point where, when I was doing this National Academies study in 2008 on disruptive technologies, I made a joke that actually appeared in one of the reports. I said, “If you haven't seen it on Star Trek, it's probably not worth inventing.” And for those who remember the old, campy Star Trek, not the new stuff but the original version: Captain Kirk was always talking to the computer. It's like, “Computer, can you tell me where the Klingons are?” Or, “Computer, can you tell me what's happening on the ship?” And then there was this one episode where the computer got fresh with the captain because, apparently in the backstory, the computer had been reprogrammed by a bunch of women aliens who decided that computers were too boring and needed a personality. So every time Captain Kirk talked to the computer, a female voice would give him a really snide remark about the kind of question, or the way he was asking the question. Fast forward to where we are today. 

Sandy Winnefeld: Women. 

Gilman Louie: Fast forward to ChatGPT today, which everybody's talking about. It's controversial and disruptive and exciting and transformational and scary all at the same time. Artificial intelligence has been around since the 1950s; this is not a new technology. But this is the very first time the technology has matured enough, because of the computers, because of the algorithms, and because of cloud computing, that we're able to use these new AI techniques to generate what we call a chatbot. And a chatbot, think of it as a computer behind a messaging window: you can ask the computer questions as if you were talking to a person and have a conversation, and it's general-purpose. 

There's this old test called the Turing Test, in which the rule was: you put a person in front of a curtain, and on the other side you had both a person and a computer, and if the questioner could not distinguish between the human and the computer, the computer was thought to have achieved sentient capabilities. This is the first time that we appear to be heading down that path. It's clearly not sentient at this particular point, but you can have a conversation with ChatGPT and some of its competitors, and it will give you answers to the point where, if you were to ask it bar exam questions, it could pass the bar today. It could pass the general-purpose medical examination today. It could score in the 90th percentile on the SAT today. And you could ask it for poetry. You could ask it for poetry in Chinese, and it'll write a poem for you.

Dr. Sandra Magnus: How do those algorithms work? I mean, it's more than just having a monster database that it pulls stuff out of. There's some sort of relational aspect, right?

Gilman Louie: Yes. In fact, what's amazing is that what we have done in AI is build a thing called a neural net, which is modeled after the human brain, after how neurons work; we've built electronic versions called neural nets. We take large amounts of text data and process that text data through the neural net so that it learns the patterns of language. We call these large language models. And so we just feed it the entire Internet: anything that's open source and available, chat groups, factual information from news sources, entertainment, the things people are blogging about. And what it's doing underneath is learning the statistical relationships between words next to other words, and generating a concept. Now, unlike us humans, it doesn't really understand what the words mean the way we do. But because we feed it billions and billions and billions of documents to ingest, it answers questions in a very human-like way. But it's still young and immature, because sometimes it answers questions like a nine-year-old, or one of my teenagers, which is to say it will give you a wrong answer with 100% certainty.
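For readers who want to see the "statistical relationship between words" idea concretely, here is a minimal sketch in Python. This is not how ChatGPT is built; modern large language models use deep neural networks trained on billions of documents, while this toy merely counts which word follows which. But it illustrates the same underlying intuition of generating text from word-to-word statistics.

from collections import defaultdict, Counter
import random

# Toy corpus standing in for "the entire Internet".
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words follow which: the statistical relationships Gilman describes.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Generate text by repeatedly sampling a statistically likely next word."""
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break  # no observed continuation for this word
        # Sample the next word in proportion to how often it followed this one.
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat"

Note that the generator has no idea what a cat or a mat is; it only knows which words tend to follow which, which is exactly the "doesn't understand the words the way we do" point above.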

Dr. Sandra Magnus: So there are statistics and pattern recognition basically underpinning the whole process.

Gilman Louie: In fact, it's generally all statistical. The analysis of words and word relationships, the patterns generated in these neural nets, creates a statistical mapping of human language. But when you talk to it, it sounds like a person, it acts like a person.

Dr. Sandra Magnus: It’s like superposition on words.

Sandy Winnefeld: So, Gilman, you mentioned a moment ago something called sentience, which is sort of close to consciousness. And I think we would both agree that it's not sentient yet, but at least one of the developers of Google's LaMDA program believes it is sentient, which means it experiences itself and it has feelings. He based that on the fact that the only way he could get it to violate its own security protocols was by repeatedly insulting it, which meant to him that it had feelings. How long is it going to be before we actually declare that these things are sentient? Or will they ever be?

Gilman Louie: I think it'll be a while before we get to a place where the machines truly, at some level, have a basic understanding of the concepts behind the words and the data that we're feeding them, such that we as humans would say it is nearly the equivalent of, or something very similar to, the way we think and the way we actually make decisions. Words are just a symbolic representation of the physical world, one that we use and can understand. And so simply feeding a machine all the words that we use doesn't mean it understands them. It just means that it understands the patterns of those words in a very sophisticated kind of way. We're still a little ways off from what we call the singularity, in which a machine can actually be deemed to have some form of consciousness; it could be a computer form, not a human form, but some form of consciousness.

Sandy Winnefeld: But what about sentience and the concept of the computer making a leap between two pieces of material or text that aren't statistically related and don't have an obvious pattern? That's almost like an intuitive leap.

Gilman Louie: Yeah. In fact, that's the nature of a neural net. So there's a lot of discussion about how we need to have explainable AI. I can make an argument that we should have AI that's more explainable, but with today's basic technology, explainable AI is kind of like clean coal. I tried to explain this to my teenager, who asks me lots of AI-related questions. I have an 18-year-old, and she wanted to know how the machine made a connection between two concepts. And I said, "Well, honey, I can ask you the same question: how do you make a leap? You can give me an answer, but for me to actually follow the pathway through your neural net, I can't, because of the complexity of the neural net. I can't drill into somebody's brain to actually understand how that decision got made. I can look at it as a black box, all the stuff that went in and all the stuff that came out, but what goes on in the middle, what goes on between your ears, I can't totally understand. We just don't have enough of an understanding of the brain." The same thing is true with AI. We can understand what we feed it, we can kind of guess statistically why it said a certain thing the way it said it, but to actually know with certainty how it got to that conclusion, that doesn't exist in the neural network environment.

Sandy Winnefeld: I'd say it's probably especially hard for teenagers to model that as well. Having been one myself, I know how that is.

Gilman Louie: Well, that’s why teenagers are very comfortable using ChatGPT for their term papers. In fact, they're embracing it. And I've got to say, as we hash out the pros and cons of these technologies, the social impact, the economic impact, the national security impact, as we debate that and think about it, we have a whole generation of teenagers and young adults all over the world who are using this today for everyday life. So they're not questioning it. They're just saying, “Okay, it gives some dumb answers on these kinds of things, but I know how to use this technology in a way that gives me the power to do things I couldn't do before.”

Sandy Winnefeld: You mentioned something a moment ago that I think is important, and that is answering a question incorrectly with 100% certainty. And I asked ChatGPT to write a short biography of me, and it was okay, but it had a number of very important facts wrong. And so I would think that there's some danger in there where if people start to rely on this thing as, hey, it's the oracle, it's the truth, but in fact, it's wrong in many cases, that could be hazardous, right? What are your thoughts on that, Gilman?

Gilman Louie: Absolutely. I think that's the foundation of some of the debates today. Do we slow down? Do we study it? Do we try to get it right before we move on? And the answer is that getting it right is going to be really important. But the flip side of that is, if we're in the early days of the automobile and it's fundamentally dangerous, because there are no seat belts and the wheels are rickety and it doesn't have brakes and all the problems those early cars had, there's an argument that you don't want to slow down. In fact, you want to speed up the development to get to a safer place. Another way to put it, for fighter pilots: you don't want to approach Mach and hang out at Mach. You want to go through Mach and get to the other side; otherwise, your wings will fall off. AI is in that kind of delicate position right now. Now, there are things we could do to make the AI better. For example, again, I like to use my teenagers as an example of the state of AI.

Sandy Winnefeld: If she listens to this podcast, you could be in some serious trouble. Go ahead.

Gilman Louie: She listens to all of this. She usually tells me what to say on these particular things. And my daughter would say, “Look, just because a fellow student gives an answer, you've got to fact-check it, right?” Literally, you can come up with an answer, but somebody should go back and fact-check to see if it's true; that's no different from a good editor. AI, in its current implementation, doesn't have good enough fact-checking. It basically has unfiltered output. Whatever appears in its neural net is coming onto the screen with very minimal filtering. And filtering may not be the right strategy; you might need a different, more trusted technology to do the fact check. If the AI writes, “Sandy, you won the Medal of Honor,” that's what the AI has determined. Now, what it's really determining, based on your history and fact pattern, is that you might have, or should have, won it, but it decided that you won it. And so all we need to do is put that claim back through a traditional search engine to determine that Sandy never won a Medal of Honor, and edit that line.
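As a thought experiment, the verify-then-edit loop Gilman describes might look something like the Python sketch below. The functions generate_claims and trusted_lookup are hypothetical stand-ins for a real chatbot and a real trusted reference source (a search engine or curated database); the point is only the shape of the loop: generate, check each claim against a trusted source, and flag or edit anything unconfirmed.

def generate_claims(prompt):
    # Hypothetical stand-in for a chatbot's raw, unfiltered output,
    # already split into individual factual claims.
    return [
        "Sandy Winnefeld served in the US Navy.",
        "Sandy Winnefeld won the Medal of Honor.",
    ]

def trusted_lookup(claim):
    # Hypothetical stand-in for a trusted reference source.
    verified = {"Sandy Winnefeld served in the US Navy."}
    return claim in verified

def fact_checked(prompt):
    # Keep confirmed claims; flag the rest instead of asserting them.
    results = []
    for claim in generate_claims(prompt):
        if trusted_lookup(claim):
            results.append(claim)
        else:
            results.append("[UNVERIFIED] " + claim)
    return results

for line in fact_checked("Write a short biography of Sandy Winnefeld."):
    print(line)

In a production system the flagged claims would be edited or dropped before the user ever sees them, which is the "edit that line" step Gilman mentions.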

Sandy Winnefeld: It's interesting you say that, because this biography had me bearing great responsibility for the raid that killed Osama Bin Laden, and I had absolutely nothing whatsoever to do with that raid. I was in another command. So you're right, it is susceptible to that kind of thing. And I would imagine you can seed it with faulty information that it then reaches down and grabs.

Gilman Louie: It's called poisoning. One of the areas of concern around AI is that another algorithm, or a group of people, could try to poison either the data or, by asking specific leading questions, get it to draw leaps that lead it down a false path. And so that's a whole area of AI research that's going to be really important to mature. If I were to go forward ten years and look at where AI will be: AI safety and AI security will be no different than cybersecurity is today. It will be as big an industry, if not bigger, than cybersecurity, because so many systems are going to be driven by these AI algorithms. AI safety, AI security, AI trust and confidence will lead a whole new industry to be built around this series of technologies.


Dr. Sandra Magnus: So we talked about poisoning, and we talked a little bit about trust-but-verify. What other risks have to be mitigated now, while we're in the infancy stage, and perhaps long term, beyond the protection of the data that you just talked about?

Gilman Louie: Well, I think industry, government, academia, and research all have a role in making AIs more robust, more trustworthy, more reliable, and more accurate. On the government side, what the government should start thinking about doing is setting up the ethical framework in which we build these autonomous systems: What should they be connected to? What should be their rules of the road? We have product safety as part of the culture within our government ecosystems. If a plane crashes, we have a whole bureau, the NTSB, that goes out and looks for why it failed. We need similar structures for the use of these AIs. If a car using an autonomous algorithm crashes on the road, the NTSB currently will go there. But we're hooking up general-purpose AI to control a lot of systems, and when things go wrong, we don't have a group of experts who can go in, examine post facto what the lessons learned are, and then disseminate that to industry. 

Sandy Winnefeld: Interesting.

Gilman Louie: And then industry itself. I'm not a big believer in government solving all problems; industry also has to set up its own policies. So industries can set standards: What's the appropriate standard of testing? What's the appropriate standard of verification? Is there a UL or Good Housekeeping seal of approval that says a system meets this particular set of requirements? And then academia really needs to focus on some of these areas that are young and immature, to make the systems more robust. So I'm thinking, if we're smart, rather than arguing about whether we should use it or not, whether usage should be stalled or sped up, each of these different groups of interested stakeholders could actually contribute to a more robust and reliable system. And in some ways, and this is a very important point, it's very hard to apply these technologies and not reflect the social values of the people who create them. It's nearly impossible, because when you train it and put the guardrails up on it, your DNA, your social imprints, are in it.

So the question, for a democracy, is: Do we want the DNA to be fundamentally grounded in our view of how information should be curated, used, and applied? Or do we believe that an authoritarian state should have control over that? You can see how these technologies could easily be used in a way that carries a completely different social imprint from what we would find acceptable here in the US. So I think there's huge value, democratic value, economic value, and national security value, in the US actually leading the way in the West on how these technologies should be built and implemented and tested and certified. 

Sandy Winnefeld: Yeah, you brought up authoritarian governments. There's a little bit of a sinister national security side to this, where we in the West see AI as something whose development should be understood, maybe managed a little bit, whereas other nations, China and Russia, aren't going to have that kind of constraint. They'd be only too happy to inject it into some pretty serious capabilities like information warfare, weapons systems, and the like. So do you see our more thoughtful and restrained approach as a disadvantage vis-a-vis our adversaries, or is the moral high ground the way to go here?

Gilman Louie: I think the US led the way on putting out the first set of ethical frameworks for any military in the world, DOD Directive 3000.09, and then a bunch of other ethical frameworks, because we're grounded in law. The military is grounded in a structural view of how laws are implemented within the framework of our Constitution. So there was a whole discussion, for those of us on the AI commission, about whether it was appropriate for the department to have ethical frameworks. I was a member of the National Security Commission on Artificial Intelligence. It was a two-and-a-half-year commission, with the best technologists and the best scientists in the world, at least in the US, getting together, looking at what the rest of the world was doing, and making specific recommendations, not just to the Defense Department but to academia, the states, the federal government, and industry. We covered the whole span. And what we said is, if the US doesn't put out the ethical framework, particularly where lethal weapons are involved, then we stand in peril of allowing other countries to set that standard. 

Remember, back around that 2019 framework, we had competitors and adversaries field-testing AI-driven machines without any such framework. And that's a very dangerous place for the world to be: no decision-making structure, no man in the loop or on the loop, nobody looking over it asking, what are these systems? Who's responsible? Who's in charge? So we took the first step. And what was interesting is that all the other nations started to follow. That's what US leadership really means in these particular areas, because particularly as we get into autonomous systems, whether it's an autonomous sensor flying over Ukraine or a smart piece of munitions, what determines intelligent versus non-intelligent? I would generally say, in the context of warfare, you want systems to be smarter, not dumber. Dumb systems actually do a lot more harm than smarter systems. So we have to be very thoughtful when we design these systems, when we field them, and when we decide to use them.

Dr. Sandra Magnus: So, switching gears a little bit, but on a similar note: we talked earlier about social impacts, the fact that social media came so fast that it's perhaps stressing society's ability to adapt, and there are unintended consequences. You can see the same thing potentially happening with AI. Do you have any thoughts about the future we face in that realm?

Gilman Louie: Back in 1984, we thought as technologists that if we could build out and connect up the world, make information freely available, and let people self-assemble around democratic values, the truth would be set free. And what happened, in spite of all the benefits of the Internet and these technologies, and there are many, many benefits, is that we did not foresee the dark areas that got created. Ultimately, in many areas of society, we created 1984 with these technologies. So the question for the next go-round is: how do we make sure that we're much more thoughtful, not just about the technology when we build these systems, but when we build the economic models? If the economics is driven by clicks, and we fundamentally change news from a public responsibility to a framework in which we need people to click in order to generate revenue for news services, we change the nature of how news is presented, particularly if the things that get clicked are the things that represent the most extreme views. And as you apply AI on top of this, you're supercharging these systems. How do we supercharge these systems in a responsible way? It's more than just TikTok. TikTok has AI underneath it, algorithms that make you want to click more and stay more engaged. We would use the word engaged; others may use the word addicted. So one of the things that we have not invested in is the soft sciences. The tech companies all want the better engineer, the better AI computer scientist who's off creating the next set of algorithms, and all that's important. But where's the sociologist? Where's NSF's investment in the soft sciences, which are an important framing of how these technologies get implemented? 

And so while we're very biased toward STEM and the hard sciences, we need to really rethink the soft sciences, because the soft sciences are really what represent what's going on between our two ears, that human neural net. When you put the power of a computer neural net up against human neural nets, that interplay could have a lot of positive benefits, the interaction between human and machine creating things that could not be created without that machine-human teaming. But it could also create all sorts of social ramifications, of addiction, of more fragmentation, on the other side. And the tech companies kind of want the government to go away: let us do what we need to do, we'll sort it all out. But the tech companies aren't serious about this. 

AI can become the next GMO. We're already seeing countries like Italy in Europe banning ChatGPT, and other nations are thinking about regulatory environments. That would be a disaster for the development of these technologies in any form, whether responsible or not. So companies have not just a moral obligation, they have a business obligation, to step up to the table, build safer systems, more trustworthy systems, understand the social impacts, and do it thoughtfully, coordinating and educating and getting academia and governments involved, rather than pushing them away.

Dr. Sandra Magnus: Gilman, we're getting to the end here. So as we close this session, is there anything else or one particular thing that you want people to remember as a takeaway concerning the subject of AI?

Gilman Louie: AI will affect our lives in so many profound ways, from how we work, to how we live, to how we play, to how we interact. It's the first set of technologies that cuts across all of society horizontally, and it clearly could be used for good. We're going to discover new drugs, we're going to improve health, we're going to discover answers to questions that have puzzled mankind for thousands of years; with the help of AI, we're going to go off and solve a lot of these problems. On the other side, it will have large impacts from a social point of view that could be destructive, displacing, and in many ways threatening. So we need to go very thoughtfully into this next decade, and we need to think very hard about how we apply these technologies and how we put up the appropriate guardrails. 

But even with the good uses, the positive uses, this is the first time a technology will threaten large swaths of white-collar jobs. Millions of white-collar jobs could easily be replaced by the next generation of this technology. And so many of the jobs we value today, our legal jobs, our accounting jobs, our tech jobs, our programming jobs, our engineering jobs, might go the way of the typing pool and the tellers. So while we'll create a bunch of new jobs, we'll have a huge displacement of others. From a social-fabric point of view, we need not only to provide the safety net, we need to help with that transition, so it's done in the most constructive way and we can reap the benefits without necessarily suffering all of the potential challenges of AI. So we'd better think about it.

Sandy Winnefeld: It's certainly been a disruptive century so far, hasn't it? From COVID having people working from home and threatening an entire commercial real estate industry, to this. And I would imagine it's hard enough to figure out how to deal with the applications of the AI we know about. The advances in AI that are coming are going to make it even harder, and we can't even think about what those are. That's pretty amazing.

Gilman Louie: I would turn again to the sociologists, to the science fiction writers, to the novelists, those people who spend their careers understanding the human spirit. Many of the basic principles that we're arguing over for AI, Isaac Asimov wrote about in the 1940s, when it was still totally science fiction. We need to draw on that body of literature as a thought experiment that's been going on for close to 70 or 80 years, and ask: what is in that body of literature that will help us make this transition smoother and more productive?

Sandy Winnefeld: Well said. And, boy, Gilman, this has been a fascinating discussion. We could go on a lot longer, because I've got a ton of questions, but I'll see you here soon. Thanks so much for spending time with us. I know you're very busy with your numerous activities, and we really appreciate it. You shed a little bit of light on this very difficult subject for our listeners, and for us, too. So thanks so much for joining us.

Dr. Sandra Magnus: Thank you very much. It was very interesting.

Gilman Louie: Thank you. Thank you, Sandra. Thank you, Sandy.

Dr. Sandra Magnus: That was entrepreneur and expert on emerging technologies and artificial intelligence, Gilman Louie. I'm Sandra Magnus.

Sandy Winnefeld: And I'm Sandy Winnefeld. Check us out on social media. We're all over the place. And we'll see you next week for another episode of The Adrenaline Zone.
