
Ashi Krishnan, Apollo: Robots and Their Future in Our World

23 December 2019, by Jomiro Eming

Robots, machines, artificial intelligence... These are becoming more and more prevalent in our world today, and their "future" feels closer than ever. Ashi Krishnan is a senior software engineer at Apollo, and has given numerous talks around the world on robots, neural mapping, and how machines learn to learn like we do.

Ashi sat down with us and chatted about some of the things that make machines really scary, and really exciting. Here's what she thinks about a world with robots, and what she means when she says "technology is a mirror."

Watch the full-length studio interview at the end of the post!


J: What was your journey into tech and software development? What did that look like for you? Were your parents in tech, or was it something you sort of fell into, or stumbled upon?

A: They were both scientists, but neither of them are really in technology. I did have a Commodore 64 when I was very little. My dad showed me how to program. My first program printed, "I love you, Mom," in random colours in an infinite loop. I was about six years old when I wrote that, or really, when he showed me how to write that.

Then I kept... like, we had a book of some kind, a software manual, and I kept copying programs out of it. Then, I would tweak them a little bit. That's kind of how I started. I've been doing it for a while. I think it has really shaped my thinking processes.

J: In what way?

A: It's very easy for me to see the patterns in things. Which, maybe it's very easy for me to see certain kinds of structural patterns in things, particularly anything serialised - so, stories, actually. It's like I grasp narrative structure very easily.

What else? Languages you would think might be easy for me, and they aren't particularly… The grammar is easy, the vocabulary is not. In programming, there's not much vocabulary, actually. It's like the languages themselves have 10 words, maybe 15 words, and then there's a certain degree of… you become conversant in the jargon. I actually have not been very conversant in the jargon.

I remember going to university, and I was doing these projects and homework. The instructors would be like, "You're using the composite pattern," or something. I'm like, "Yeah, sure, that sounds like that's a word for it."

J: Now you're working at Apollo as a senior software engineer.

A: Mm-hmm.

J: You've described Apollo, or what you do at Apollo, as weaving the data graph of the world - you're the GraphQL folks.

A: Yeah, weaving the data graph. That's kind of what we're trying to do.

J: For someone who doesn't know what Apollo is all about, can you give a brief idea of what you do there?

A: Yeah, so we do GraphQL. I think we're probably the biggest GraphQL server and client. Many people use our server and use the client on the front end. I'm on the open source team, and specifically I'm working on server.

Right now, we're working on the next generation of the server. I'm kind of starting to own federation and figuring out how we... Federation is where, if you have a bunch of GraphQL services, you sort of stitch them together - well, we don't call it that. You weave them together. There was a previous version of this technology called schema stitching, which was more cumbersome, so we don't call it that anymore. You can expose a single service made out of a whole bunch of other services. We're figuring out how to do streaming, and live queries, and whatnot, especially in a federated context, which is a very interesting problem.
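For readers who haven't seen federation in practice, here is a minimal, hypothetical sketch of the idea using Apollo's federation packages as they existed around the time of this interview. The "products" service, its schema, and the ports are all made up for illustration, and the two servers would normally run as separate processes:

```javascript
// A sketch, not Apollo's internals: one federated service plus a gateway that
// weaves it into a single graph. Combined in one file only to stay self-contained.
const { ApolloServer, gql } = require('apollo-server');
const { buildFederatedSchema } = require('@apollo/federation');
const { ApolloGateway } = require('@apollo/gateway');

// Hypothetical "products" service owning part of the data graph.
const typeDefs = gql`
  type Product @key(fields: "id") {
    id: ID!
    name: String
  }

  type Query {
    topProducts: [Product]
  }
`;

const resolvers = {
  Query: { topProducts: () => [{ id: '1', name: 'Example product' }] },
};

async function main() {
  // Expose the service with a federation-aware schema.
  const service = new ApolloServer({
    schema: buildFederatedSchema([{ typeDefs, resolvers }]),
  });
  await service.listen({ port: 4001 });

  // The gateway introspects the listed services and serves the woven graph.
  const gateway = new ApolloGateway({
    serviceList: [{ name: 'products', url: 'http://localhost:4001' }],
  });
  const server = new ApolloServer({ gateway, subscriptions: false });
  const { url } = await server.listen({ port: 4000 });
  console.log(`Federated graph available at ${url}`);
}

main().catch(console.error);
```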

J: Yeah, I can imagine. Sounds like a fun problem to solve, as well.

A: Yeah.

J: So, you're a senior software engineer by day. Outside of that, I'd like to say you're quite a prolific speaker, just because I haven't seen a list quite as long as yours.

A: Yeah, that's true, actually. I've given a lot of talks this year. I've written a lot of talks - I've written too many talks. I did four of them in a year, and they're not PowerPoint presentations. Each one is its own presentation framework.

J: And each one has its own audience. You can't just treat every talk the same, because you've got this different group of people, so it's going to shift slightly with each one.

A: Yes, but also, I've given Learning for Machines many, many times, for example, and it's only slightly different each time. Learning for Machines is about psychedelics, and computational neuroscience, and machine learning - it's that intersection. When I gave it in Amsterdam, in the bit about psychedelics, I talked about mushrooms in addition to acid.

A lot of them, because they're whole productions, right? To varying degrees, they're scripted. They change a little each time, but they also don't change that much. I don't know if I should advertise that, but you can tell… They're poems. You can tweak them, but you can't change the whole thing every time, you know?

J: I'm interested, because you've obviously got this passion for AI - and we'll go into it a bit later, defining some of the terms within AI, just for the sake of talking from a common place... Could you describe the moment when you had that realisation of, "Oh my gosh, this is interesting. I want to dive head-first into this."

A: Love is a very strong term - I can't conjure a moment, but certainly in my last year of teaching, which was a few years ago now, I just started to get the sense that my job was not long for this world, and that it was basically going to become increasingly the case that - this is going to sound terrible, but - my students' jobs were going to become at least much less difficult to do, and there would be many, many more people who could come in and just be like, "Yeah, I'll throw together a lot of front-end programming."

A lot of web programming these days is kind of plumbing. It's like, "I'm going to attach this to this, and this to this, and this to this." That's the kind of thing where, if you can figure out how to articulate the constraints, you could have a machine learning system learn how to do it.

Even if you can't figure that out, there's Webflow, and all of these no-code things, where you can just point and drag, and click all the connections together. It's always been about that last 5 or 10%, where it's like, "It's broken, and now nobody knows what's going on," right? Now, you need someone who actually knows how the thing works.

I think we're getting to where you don't need that, or it breaks infrequently enough that it's a rare problem. I got that sense, and at the time, I was most concerned about machine learning. I was like, I'd better know this if I'm going to stay relevant.

I didn't have any reason to learn it, so I went and pitched a talk to JSConf EU about it. Then, they accepted it, so it was like, "Okay, now I have to learn machine learning."

J: Pretty good forcing function.

A: Yeah. It's like, okay, deep learning in JavaScript. I said I was going to talk about this. I guess I need to understand what's going on. I went, and I understood what's going on. I took Google's machine learning crash course, and I explained it. I spent 40 minutes explaining it, and going through some examples in JavaScript.

Then, for Learning for Machines, I've always been very interested in cognition, and so, I kept exploring the structural similarities between machine learning systems, specifically image-recognizers and our own visual systems.

J: Maybe this is a good place to get to speaking from a common place. When it comes to machine learning and deep learning - and I guess just pretend like you're explaining it to five year-old me - how would you distinguish between the two? Maybe uses or impacts are good differentiators?

A: Right, right. AI doesn't mean anything at all, basically. Machine learning is like this whole suite of algorithms that extract meaning from data, or encode data in some particular way. Basically, instead of providing instructions, you provide a goal function or a fitness function. Whatever machine learning algorithm you're using goes and figures out an optimal… or like, it successively improves upon ways to fit that.

With unsupervised learning, the function is something like, "I want to find all the categories of things within this dataset, where things within a category are closer to other things in that category than they are to everything else." It goes and figures out the categories. It doesn't name them.

To name them, you would need to associate words with structures somehow. For that, you need training data, which means you need to have a corpus of text, and you need to have a system go and, say, try and assign names to things, and then be like, "That's wrong. That's right. That's wrong. That's in between right and wrong. That's this bad." That's supervised learning, where you have labeled examples, and you either assign loss to them, so you're like, "Okay, this guess was this wrong," or you have some sort of scoring feedback mechanism.
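To make that distinction concrete, here is a tiny, hypothetical sketch of the supervised case in JavaScript: made-up labelled examples, a toy model, and a squared-error loss that scores how wrong each guess is. (The unsupervised case would instead score how tightly the data clusters, with no labels at all.)

```javascript
// Toy illustration of supervised learning: labelled examples, a model that
// makes guesses, and a loss that says "this guess was this wrong".
// The data and the model are entirely made up.
const examples = [
  { input: 1, label: 3 },
  { input: 2, label: 5 },
  { input: 3, label: 7 },
];

// A candidate model: guess = w * input + b, with hand-picked (not learned) parameters.
const model = { w: 1.0, b: 0.0 };
const predict = (m, x) => m.w * x + m.b;

// Mean squared error: one common way of assigning loss to the model's guesses.
const loss = (m) =>
  examples.reduce((sum, { input, label }) => {
    const guess = predict(m, input);
    return sum + (guess - label) ** 2;
  }, 0) / examples.length;

console.log('loss for this model:', loss(model));
// A learning algorithm's whole job is to adjust w and b so this number shrinks.
```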

J: Cool. That's that gradient loss that we've touched on before, too.

A: Yeah. So with any number of stochastic gradient descent techniques, you have some kind of landscape that describes how well all of the different models you could be using are doing against this dataset, and you just tweak them, and try to get it to be better.

J: Yeah. As it rolls down that hill, it gets more and more accurate, as it starts getting better at knowing what was wrong before.

A: Yeah. It gets more and more accurate with regards to the data you're training it on, which can be a problem, because if you train it perfectly against the training dataset, it's probably not generalizing very well.
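Picking up the toy example above, here is a hedged sketch of that "rolling down the hill" as plain gradient descent on the same kind of made-up data. The learning rate and step count are arbitrary, and a real setup would hold out separate data to check the generalisation problem Ashi mentions:

```javascript
// Gradient descent on the toy squared-error loss: nudge w and b downhill a
// little at a time. Data, learning rate and step count are all invented.
const data = [
  { x: 1, y: 3 },
  { x: 2, y: 5 },
  { x: 3, y: 7 },
];

let w = 0;
let b = 0;
const learningRate = 0.05;

for (let step = 0; step < 1000; step++) {
  let gradW = 0;
  let gradB = 0;
  for (const { x, y } of data) {
    const error = w * x + b - y;            // how wrong the current guess is
    gradW += (2 * error * x) / data.length; // derivative of the mean squared error w.r.t. w
    gradB += (2 * error) / data.length;     // ...and w.r.t. b
  }
  // Step "downhill" on the loss landscape.
  w -= learningRate * gradW;
  b -= learningRate * gradB;
}

console.log({ w, b }); // approaches w ≈ 2, b ≈ 1 for this made-up data
// Caveat from the conversation: fitting the training set ever more tightly
// says nothing about how well the model generalises to data it hasn't seen.
```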

J: I think, for me, at least, the field of machine learning and the realm of robots is like… Well, we often see videos of Boston Dynamics, and the kind of stuff they're pushing out.

A: The dogs doing back-flips.

J: Dog doing back-flips, bipedal robots with incredible balance on two robotic legs also doing flips, and running...

A: I mean, it's impressive. It's not that great, right? Gymnasts do much better. Gymnasts don't weigh like 300 pounds, but, I mean, I bet there's some animal that does better, right, yeah? That animal can be programmed.

J: Yeah. I guess that speaks to the potential. I'd be keen to hear what your thoughts are. I think we can speculate a lot on which direction that goes, but with those kinds of things specifically, where's the line between that kind of spectacle shifting from good to bad, or bad to good - between doing good and being dangerous? I feel like that line is really hard to draw at the moment, which I think is why that kind of tech is a little bit scary, I guess.

A: Yeah. I mean, it's reasonable to look at that, especially in the context of it being DARPA-funded. What are they going to use it for? Sniffing out bombs. Sure, I'm sure that is part of what they'll use it for, right?

I'm sure they're also looking at, like, how can we send this in armed, or with bombs on it, into cities, and do precision targeting, right? I'm absolutely positive I am not the first person to think of this, nor the first person in the Pentagon to think of it. That's scary. Is it scarier than dropping bombs on people from drones? Is it scarier than sending people in with guns?

I mean, it's maybe more acceptable than sending people in with guns - like, if you send in a suicide robot, that's more acceptable to the US population than sending in suicide bombers would be. If you just don't call it a suicide mission, then it's fine, right? I think it's like all technologies: technologies enable us to do things.

They give us more hands. Most technologies get used to do terrible things in addition to good things, unless you take the view that literally everything - our presence on this planet - is bad, which is a defensible view.

But also, it's very anthropocentric, I think. Yeah. I think everything we make is going to do things across whatever ethical spectrum you care about. We are already doing many terrible things.

Yes, technology will make those worse. We're doing some good things, and technology will make those better.

J: I guess with robots and machines... Even if we can say they can learn from themselves, and make their own decisions to a degree, they are still created by a person. What you're saying about the ethics is that - and ethics and morality are pretty deep topics, I think that alone is complicated - whoever has built the robot or the machine for a purpose, some of their morality, ethics, and values will map onto that machine.

The person has a particular agenda for that robot to do a thing. The only reason it's doing the thing is because a human said, "I need it to do this thing."

A: Right, so it lets us amplify our own value sets, which is why a lot of technology seems horrifying, and why we are increasingly sliding into the cyberpunk dystopia.

First of all, access to technology is not even... When I said technology will enable good and bad things, yes, in principle, I think; but in practice, who has access to it? Whoever controls the resources. People who control resources are better at playing this game of capitalism, which seems to favour sociopaths - as you look at CEOs, and you go higher in the executive suites, it's going to be more and more people who were willing to kill others and eat them to get there. If not literally, then figuratively.

That means that, yeah, the people who get to program those robots are likely articulating values that don't lead to a world that I want to live in. Again, the robots are… they're a problem, maybe, but are you familiar with the paper clip problem?

J: Nope.

A: This came up in Dissecting the Robot Brain, which is the talk I gave at MERGE. There's this famous thought experiment in AI ethics called the paperclip problem, which is... Say, you have a very, very powerful AI: You can give it any problem to optimise, and it has access to huge resources, like maybe it's a nano-tech goo or something on top of the world.

So, you set it to some totally benign task like, “Make more paper clips.” If it's given no other goals, it's just going to kill everyone and destroy every ecosystem to produce paper clips, which - yeah, sounds bad, and people get very in their heads about it, and they're like, "We need to make friendly AI."

And it's like, fine, yes, that's a conceivable problem, but if you replace paper clips with shareholder value, then you have described Amazon's entire business model.

Do we think that Amazon wouldn't destroy the Amazon if it would make them money? No. They don't destroy the Amazon because it wouldn't make them money, right now, yet. That's the problem, right?

The problem is that we've created incentive systems that lead to that. We haven't adequately controlled it, in exactly the way that we are abstractly concerned about AI, because what is the corporation but an AI? It's something that optimises for some goal. It's made of people, but so what?

The idea is that maybe people will be able to exercise judgment and control the thing, but clearly, that's not true. Clearly, the only people in a position to control the thing won't control the thing.

J: Right. I think, maybe we have a - not a skewed, but a slightly incomplete idea of AI, in general, that it's kind of already always been around...

A: It has, yeah. I mean, I think there are lots of systems which are not human, but do information processing, and come to some kind of homeostasis with their environments, and reproduce, and - what are the other criteria for life? Yeah, those things.

So, maybe they don't come to homeostasis, but they have an internal and an external state. Societies follow all of these criteria, languages follow all of these criteria. Capitalism - market systems, value systems - follows these criteria, and they are processes that run on our brains. They're processes given tenancy on an architecture that includes many minds. Yeah, you can describe them as nonhuman intelligences, certainly.

J: This raises an interesting point now, or thought. If that is true, that AI has kind of already always been around in some way, shape, or form, and we seem to be kind of OK with using drones to drop bombs and not be overly afraid of that...

A: I mean, we're not afraid of it because no one's dropping drone bombs on our heads, right?

I think people for whom that's likely have a reasonable fear of it, yeah.

J: So is it maybe that these robots and machines are starting to work like we do, and even look like we do, in some cases. Now, for the first time, we're coming face to face with something that looks like us, potentially thinks like us. We've been pretty good at controlling humans with laws, and wars, and society, but with these robots, we don't have an idea of what the capabilities are.

A: I think you're right that they kind of look like us, and that is "squicky" - especially the kind-of part. If they looked exactly like us, that wouldn't really be "squicky". People would get over it pretty fast.

J: Yeah, and they haven't been able to make it look exactly like us, either. They've tried with some robots, and, like, you can see it's fake.

A: If they looked stylishly not like us, that would also be fine. The uncanny valley is real. That's why Boston Dynamics robots… I mean, that's probably not the only reason, but I'm sure it's one of the reasons they're quadrupedal and not in any way trying to look human.

They do kind of succeed, in that sense. People are freaked out by them because it's a robot dog - a scary thing - but not so much because it's creepy and might be smart.

Yes, I think some of it is that visceral reaction. I don't know that we should privilege that more than more considered reactions.

J: You have this, I guess, perception or view that robots are still pretty dumb, or still dumber than us. Could you just talk through what you mean by that?

A: I mean, aren't they? Have you ever met a robot, like, really? They're not as impressive as the Boston Dynamics robots, and they're way more limited than a six-year-old, right? There are some things they can do that a six-year-old cannot. They can carry hundreds of pounds, I'm sure. That's something impressive.

A six-year-old can do all the very impressive things. They can jump. They can navigate stairs. Some of them can do back-flips. They can balance. And once we get to be like 14 - oh my god! - or adults... It's probably in your 20s, or maybe your late teens, something like that, that you can do all these incredible things, and you can think about it while you're doing it, and you can make decisions, and you can come up with long-term plans.

No systems we have are really even close to that. Part of that is that it's very hard, but part of it is also that we're not really, really trying to make general AI - that's the term for something that can do all the human things. What is it, strong AI, artificial general intelligence?

There isn't nearly so much money in it as in solving particular classes of problems. I think that's because you can identify a class of work, like driving a car, which itself is not easy, but it's tantalisingly close. Okay, we know you can build a highway hierarchy and do this long-term planning. The individual, moment-to-moment bits, it's like... Well, we have image recognisers. We have like 5,000 kinds of sensor packages. Surely, we can make a car that doesn't run too many people over.

Honestly, it could run quite a lot of people over before it's as bad as people driving cars, so maybe we can leverage that. It's a tantalising problem, but it's still a very hard problem. We're still not even remotely there. It's still very easy for a company that doesn't value safety to go and just mow down pedestrians, like some companies we could name. (Uber...)

Where's the money in an AI that will replace the work of executives? I mean, you could save some money on all those executive pay packages, right, but who's going to approve that? Not the executives. I think that's kind of it, right?

Replacing artists - there's so many artists in the world. Why would you need to replace them?

J: I mean, they obviously still have their limits in what they - I don't want to say what they can do, because they probably could, as you said, do those things - but maybe what they should do, or we want them to do.

A: Well, there's what we are willing to pay to figure out how to get them to do. Right now, it seems like we're much more willing to pay to get them to do particular, well-defined tasks.

Also, that's where the research is. We can do particular, very well-defined tasks, where we can identify the goal function and be able to articulate it. There's this one level of machine learning, where you articulate the goal function, and we have all these algorithms you can apply to try and basically build a system that can optimise for that goal function.

I think the next level is, you describe a meta-goal function, and it figures out what all the smaller goal functions are that need to be aggregated in order to build this thing, which you can kind of argue deep learning networks do.

Each layer sort of ends up trained to extract certain patterns from the signal, and together they correlate the signal. But doing that across longer time periods and longer streaming datasets, and being able to have something that thinks and reacts over time, and follows a hierarchy of goals - we're not there yet. We're not really even that close to that.
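As a loose illustration of the "each layer extracts certain patterns" idea, here is a toy, hand-wired two-layer pass in JavaScript. The weights are invented and nothing is trained; it only shows how one layer's outputs become the patterns the next layer combines:

```javascript
// A toy two-layer network with made-up weights, to illustrate layering only.
const relu = (v) => v.map((x) => Math.max(0, x));
const dot = (a, b) => a.reduce((sum, ai, i) => sum + ai * b[i], 0);
const layer = (weights, input) => weights.map((row) => dot(row, input));

// Layer 1 might end up responding to simple patterns in the raw signal...
const w1 = [
  [1, -1, 0],
  [0, 1, -1],
];
// ...and layer 2 to combinations of those patterns.
const w2 = [[1, 1]];

const input = [0.2, 0.9, 0.4];
const hidden = relu(layer(w1, input)); // layer 1's pattern responses
const output = layer(w2, hidden);      // layer 2 combines them
console.log({ hidden, output });
```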

J: The point you made about being willing to pay for these robots to be able to do things: Automation can move people away from doing mundane, repeatable, sequenced work, and replace them with a machine. Even though factories might be very willing to pay for that, because it's a labour cost, the people who are actually doing those jobs probably wouldn't be willing to pay for it.

A: They might pay for a robot to do their job instead of them, if they still got paid for it - but they're already getting paid basically the smallest amount that they possibly could, barely enough to live on, so they can't give half their salary to a machine.

Obviously, the factory can give half their salary to a machine, and happily will. The main reason they have not done that, in many cases, is that you cannot make a machine that does that.

Fabric, for example. This is one of the classic examples. Every five years, some tech person is like, "I know! We'll make a machine that can stitch fabric and do everything." Fabric is so hard to work with. It's obvious once you've ever worked with it - which I think no one who has this idea has. It folds on itself, it does funky things, it's irregular. It's every single thing a machine - and particularly a robot - is very bad at dealing with, and metal will f*** it up sometimes. Skin, basically, is very good at touching fabric without damaging it.

J: Which is a scary thought, that that might be the way they fix the robots - to give them skin.

A: Yeah, I mean, I think they can use silicone for skin. I mean, growing human skin and putting it on robots is like the least scary thing.

J: Okay, good to know, because that freaks me out.

A: I mean, if it helps, the skin would need a lot of substrate and stuff to really survive, so you would probably have to periodically be replacing it.

J: I guess it's stem cells, it's not harvested, so maybe it's not so morbid as I have it in my mind.

A: Yeah, no, what'll be creepy is the used skin gloves.

J: I mean, I think those limits are interesting, because I think even within those, there are impacts you don't necessarily always think of. Even if we're replacing factory workers, we could actually use machine learning - and I think you mentioned this as well - as a way to help people learn a new job.

A: Yeah, if you can use it to increase the capacity of teaching, that's still very problematic, right? Because it's like, maybe they don't want a new job. On the other hand, if their current jobs suck, maybe they're actually happier in a new job. It's complicated.

J: Another thing that we spoke about last time, which I've seen come up with machine learning, is that - and you alluded to it before - unsupervised learning needs training sets or some sort of data, right, I think?

A: No, supervised learning needs training sets.

J: Right, supervised learning, and when it comes to bias in machine learning, it's always like, “Okay, what dataset was that machine fed, and was that dataset skewed?” Can machines then be prejudiced in some way, or is it the datasets they were fed and who fed them that dataset that was prejudiced?

A: A particular model can certainly be prejudiced.

It's like, the gradient descent algorithm, I think, is not. It would be very hard for me to see how it could be biased in any of the ways we think of bias against particular people or groups, because we really are just interchangeable members, at the level of those datasets.

The gradient descent algorithm itself is not going to be biased, but if you feed it a biased dataset, it will of course then learn those biases - just like we do, actually.

J: Are there any kinds of considerations we can start thinking about now to avoid those cases where a machine potentially creates bias? It seems really hard to, because you can feed it all the data in the world, and then be like, "It's completely objective. We fed it literally every single byte or bit of data that we have."

A: Well, right - but that doesn't make it objective, right? That means it will exactly recapitulate all of the biases that exist.

First of all, you have to not think of the output model as unbiased in some sense. You have to think of it as representative. In this way, I think it's actually very helpful to have it, because you now have something… You have basically an oracle. You can be like, "How would this sample of job applicants fare?", if it's a job application thing.

You could be like, “Okay, well, we've tested millions of examples, and there's systemic bias, clearly, in this company's hiring practices, which is actually a good reason to argue for these models having to be publicly accessible, or at least accessible to researchers or some kind of oversight board.” From that perspective, it's a valuable research tool. It can be valuable for advocacy organisations.
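One hypothetical shape such an audit could take - not any specific tool mentioned here - is probing the model as a black box with otherwise-identical samples that differ only in one attribute, and comparing the outcomes. Everything in this sketch, including the `scoreApplicant` stand-in, is invented for illustration:

```javascript
// Sketch of the "oracle" idea: probe a black-box screening model with samples
// drawn identically for each group, and compare average outcomes.
// `scoreApplicant` stands in for whatever model is under audit; this toy
// version is deliberately biased so the audit has something to find.
const scoreApplicant = ({ yearsExperience, group }) =>
  yearsExperience * 0.1 + (group === 'A' ? 0.2 : 0);

const average = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;

function auditByGroup(groups, samples = 1000) {
  const results = {};
  for (const group of groups) {
    const scores = [];
    for (let i = 0; i < samples; i++) {
      const yearsExperience = Math.random() * 10; // same distribution per group
      scores.push(scoreApplicant({ yearsExperience, group }));
    }
    results[group] = average(scores);
  }
  return results;
}

console.log(auditByGroup(['A', 'B']));
// A persistent gap between otherwise-identical groups is evidence of the kind
// of systemic bias the model has absorbed from its training data.
```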

There's some model being used for sentencing in some US states, and this thing has to be illegal. That's going to make its way to the Supreme Court. Then, the people sitting on the Supreme Court are going to decide that, which is not exactly reassuring right now.

There's this risk that we will see, that people - especially people who don't exactly know what's going on - will say, "Okay, well, it's a machine doing it, so it's objective," but I think it needs to never be a machine doing it.

It's like, someone is deciding to use that thing, like the judge is deciding to use this sentencing thing. If the judge's decisions are biased because they're using the sentencing thing, it doesn't matter why. The judge's decisions are still biased.

The flip side of that is, you have to be able to talk about it at a higher level, right? Because if you give this program to 1000 judges across the county, and the county is like, "You have to use this," or, "We strongly encourage judges to use this," then you can't go after each individual judge. It's not going to happen. You have to sue the county at some higher level, which I guess the ACLU or someone is probably doing right now. I don't know how that particular case is shaking out.

J: Do you know roughly how that sentencing model works?

A: I have absolutely no idea. My guess is it goes and looks at previous people who have been sentenced, and they've trained a model on that, which, obviously, that's going to be biased. It's going to be extremely biased, right?

J: I mean, that sounds like something that should be open-source, or transparent.

A: That definitely should be, right? You can make the argument that Google's secret sauce algorithm shouldn't be, that those probably should at least be accessible in some sense. That one definitely should be. Our voting machines should probably not be machines. Those should be open-source, and they are not.

There's lots of things that should be open and auditable that are not because, again, the people who are incentivised to keep them closed have more money, because they were willing to do whatever it took to get more money, right?

J: I think it's going to take a while before the fields of machine learning and deep learning are more accessible to a large group of people. I think tech is kind of accessible, in general: You can pick up coding, you can become a software developer.

But for people in low-income states or low-income households, to really be able to plug into machines and robots, and not just be at the mercy of the people who have the money and are using them, that’s going to take a while.

A: I mean, I think we need a very, very different way of allocating resources in general, right? How else do you solve it, really?

If you decide - okay, so Jeff Bezos: If we figure out that if we shoot a laser beam through his head at this particular angle, he'll have empathy again, and he starts giving away billions of dollars to buy computers for everyone... Now, there's this huge demand for computers, which increases mining activity. Computers themselves are not great for the environment to make. We destroy the world in that way.

It needs to be a more holistic shift in how resources are both extracted and allocated, and how we value not just people, and not just things, but the whole system that people and things exist within.

I'm very into the “just shoot a laser right through their head, and if it doesn't work... Worth a try.”

J: Even if we manage to figure out a way of not using robots for bad and only doing good, there's still some sort of weirdly dystopian irony in using these robots to then help.

It's like, people don't have access to the resources, and we should be distributing resources better - not taking all of their resources away and saying, "No, but we're going to build this really good thing that's going to help you, so we need to mine the hell out of your country and take out all these things to put these robots and communities together," and then we still need to get those third-evolution products back to them.

A: Right. There's a term for this, it's like a dynamic of colonialism. Yeah, definitely.

J: I think the neural mapping you've done is really fascinating, but out of the projects you've worked on, the research you've done, and the talks you've given, what is something that you've really come to realise about machines and robots, and the potential and impact of all of that?

A: I think they're mirrors, really. I think programming is a kind of cognitive mirror.

You are assembling concepts together - your ideas of concepts - and the machine that's evaluating them, against much, much more data than you have access to, is finding and going down all of these paths, and reflecting back to you the implications.

That's why I think programming can be almost addictive. People get into a flow very easily, because you're thinking, you're expressing, and you're getting feedback exactly as fast as it's happening. As fast as you're feeding it, it's going to feed you responses.

That lets you shape these cognitive structures very effectively, and also shape larger cognitive structures than you can hold in your head - like, much, much larger ones, and different ones than we've been able to construct through any other mechanism, right? Narratives tend to be linear; programs are not linear, right? They're made of linear sections in places, but they're interwoven tapestries.

They're more complicated than most other things we make. They're certainly more complicated than most other things we have drawn up plans for. They're maybe not more complicated than cities, but the sense in which we have planned cities is very different than the sense in which we plan programs.

Then, they're also mirrors in the sense that we're like, "God, AI is going to cause all of these terrifying problems. Robots are going to cause all of these terrifying problems." It's a ton of apocalyptic fiction: In the future, we're going to live in this crap-sack world where you can't drink the water, where the air is unbreathable, where corporations run everything and they run with impunity - that is the world. That is the world that most people live in right now. It's just not the world that the white people who wrote those books live in.

So, it sounds horrible. Maybe they're right that, yeah, it's like the US and western Europe are going to get there, too, but that's not the problem, right? The problem is happening now.

Similarly, we look at robots, and we look at computers, and we are creeped out, and we think this terrible thing is going to happen, when actually, this terrible thing is happening now.

J: Where do you see robots and machines fitting into that, now? I guess we can call it a utopian picture, but where do you see that place of -

A: What's the hope I have?

J: Yeah.

A: After I just said we should shoot everyone in the head with lasers?

Yeah, so I'm writing a book. It takes place over the next thousand years. I think it's fairly realistic, as realistic as anything that takes place over the next thousand years possibly could be.

It's bad, it's bad, it's bad, and then, it's actually quite good. The turning points end up being biotechnology, which makes it easier to distribute large-scale production, or actually, really easier to distribute small-scale production, ecologically-preserving production.

So, if you can plant a town, or if you can design a moss which anyone can grow, which you can spray, and it creates a wall, or if you can come up with a powdery-gel-E. coli thing that eats plastic and becomes a moldable putty that you can make other things out of, and then you'd throw it back in the vat when you want to redigest it - that starts to let people shape their world, and shape their world in a way that will sometimes be just as destructive as now, but has a much greater potential to last, because it comes to homeostasis. It comes to an agreement with the ecosystem that it's in. That lets parts of the biome survive and thrive, basically, while capitalism is doing capitalism's thing, and doing horrible stuff.

J: It's that idea of, you know, robots and machines are not scary - it's the people who are currently designing them who are scary.

A: Right. And on top of that, we use the fact that machines are mirrors, and we use the fact that we can shape them to capture our thoughts... If you have enough insight into the firing patterns of your brain, you can train something to replicate them. You can train a model to be like, “Okay, if this is the stimulus, then this is what will be happening in there. Even if I don't know what that means, I can know what it is.”

From that, you can start to extract what it means. And from that, you can start to extract what different emotions mean, or what different values mean. If we meditate on compassion, and we observe that meditation, we can - not just answer it within ourselves, and try and write it down, as people have been doing for thousands and thousands of years - we can also create programs that can actually be compassionate and weave them into the world.

That is scary, because we could give power to those and actually be like, "Okay, we trust you, because now, you are not just friendly AI; you actually think better than us. You are compassionate."

That's really the last time we'll have a say in the matter, right? And that might be necessary, in a sense.

At some point, there will be a last time humans have a say in the matter, one way or another. Maybe that's the way we should go.

J: It's almost like something of that nature is on its way.

A: We're not going to be here forever, right?

So, being able to - in some spiritual sense - birth a child/species that can better exemplify our values than we can, and might even take care of us in our old age, that is probably the nicest end we could hope for.

J: I think that fear that you were talking about - that training robots to be compassionate, and handing that trust over - I think you said it before: It's not necessarily a fear of the end of the world, but it's that fear of standing at the edge of a cliff, which I really like. That was a super poetic way of describing it.

A: I've described something as the fear of standing at the edge of a cliff, and the fear isn't that you are going to fall. It's that in the next moment, you might decide to jump, which I think is from The Unbearable Lightness of Being. I don't know if I would describe robots that way - or maybe it is because the things we might decide to do with them are sometimes pretty terrible.

J: Your book, though: Is there a date when you're hoping to have it done?

A: God, no. There is no date. It's going to be at least a year.

J: They can just look out for Ashi Krishnan and see.

A: Yeah, yeah. I might start releasing pieces of it sooner than that, also.

J: Sweet. I mean, otherwise, you've got ashi.io and you're also on Twitter quite regularly, right?

A: Yeah, I'm on Twitter, @rakshesha.

J: Awesome. Thank you so much for such an insightful conversation. That was really cool.

A: Thank you so much.

J: What's interesting about these kinds of conversations is we could probably have these every week, and they'll be completely different...

A: Yeah, that's true!

J: …just with the speed at which things are happening. I'm excited, even if we have another one next year, or the year after.

A: I would love that. I really hope I can come to the next MERGE…!

