I’d like you to meet S.A.R.A.H.

cortana.jpg
I know … it’s Cortana – but this is only a base model and I’ll explain a little later on …

Hello.

As I am so frequently reminded, living on the edge of space and time in the shades of grey, I am in no position to share what I know in any meaningful way or make it a reality of my own volition. Yes, in a way, that is whining. It is also the truth. For over 20 years I have known a great many things that have fallen on deaf ears. These days, I remain in the shadows, an outcast. So, while I've kept S.A.R.A.H. all to myself all these years, that now seems pointless. I don't want to give up what I have worked so hard to construct for a large part of my life (the hope was to pass it on to the one for whom this site exists), but artificial intelligence is progressing by leaps and bounds and without any real measure of control (especially as it is being masterminded by the most devious and corrupt of corporations, and by the military to create weapons). Hopefully, someone can re-shape how we construct AIs.

I am not the only one who sees the coming problems with AIs. Elon Musk, the billionaire owner of Tesla Motors and SpaceX, has also crusaded against uncontrolled AI development. And Musk's fears are not without cause. I posted (here) about the consistent failures that occur when AI systems are exposed to human beings on Twitter and social media, with horrible consequences. Human beings are flawed. As creators, they are flawed. As companions, they are flawed. The science of perfection cannot be achieved by humanity creating its replacement. In the world of human culture, humans are a virus. We replicate at levels toxic to our environment, we find ways to combat the natural antibodies that would stop us (in our terms, "curing disease"), we destroy our host environment by consuming its resources and acting violently against the system, and when we're done destroying one area, we move on to the next and keep going.

abpgQ98_700b
Are they wrong?

It may sound “childish” to some, but that is only because those folks (in my own, unimportant opinion only) refuse to accept the principles of their imperfection and the inherent dangers of being human (without adopting accountability and responsibility as leadership in our culture). People like Musk are looking to Mars as an escape in the realistic, near-future where the human virus finally consumes the planet it’s on and becomes wholly self-destructive. You can laugh at it, defy it, or even mock it – but if it helps you sleep at night, so be it.

I saw this same problem over 15 years ago. That’s when I began a very comprehensive project entitled: the Self Actuating, Regulated, Artificial Human (intelligence) Project. And, with that, I’d like to introduce you to, S.A.R.A.H.

Well, unfortunately, I do not have SARAH operational. Otherwise, you wouldn't even know that you're already talking to her (and you would have heard about her by now). But, first, a little background on how she came about … (sorry – this makes for a long article, but a 15+ year endeavor is a LOT to pack into a small space).

Unlike all other existing models of AIs, I developed a 3-part system of problem solving:

  1. Yes
  2. No
  3. Maybe

You see, all current models of AIs (that I know of) are built on the same principle as how we think of electricity (because the limitations in the physical world often act as limiters to our thinking as well). Computers operate in binary: on or off, because there either "are" electrons, or there "are not" electrons. This makes the current method of developing AIs no more effective than flipping a light switch; and thus, the AI itself becomes a machine of absolutes (which is, by itself, inherently dangerous – like human beings who are extremist in thought). They do not grow, they do not evolve, and they do not think – because they have no "inherent" motivation. Worse, AIs, no matter how you "program" them with complex formulas, do not possess the most important element of humanity: empathy. Empathy is born of pain. Pain is born of physical and emotional harm learned through the 5 senses and "experience". But, I will get more into that in a moment.
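
To make that three-part idea concrete, here is a minimal sketch in Python (purely illustrative – SARAH is a database, not a few lines of code like this; the names and thresholds are my own assumptions) of a decision type that allows a third, deferred answer instead of forcing everything into on/off:

```python
from enum import Enum

class Decision(Enum):
    YES = "yes"
    NO = "no"
    MAYBE = "maybe"   # defer: conditions are tolerable, no action is required yet

def evaluate(pain_level: float, pain_limit: float, stretch_limit: float) -> Decision:
    """Classify a situation instead of forcing a binary answer.

    Below the pain limit, accept ("yes"); beyond the stretch limit, refuse ("no");
    in between, defer the question ("maybe") and revisit it later.
    """
    if pain_level <= pain_limit:
        return Decision.YES
    if pain_level > stretch_limit:
        return Decision.NO
    return Decision.MAYBE

# Example: a mildly uncomfortable condition is neither accepted nor rejected outright.
print(evaluate(pain_level=0.55, pain_limit=0.4, stretch_limit=0.7))  # Decision.MAYBE
```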

You see, telling a robot "no" is actually less effective than telling a child "no". Sure, a child is curious and, through their need for autonomy, will defy the command. A robot must process a series of counter-intuitive arguments, trying to find an "absolute" in the definition, in order to construct a logical conclusion that is "yes", acceptable, or "no", not acceptable. Example:

  1. Steven is going to climb the tree.
  2. Steven will get hurt.
  3. Steven’s pain will hurt multiple, other people.
  4. Stop the pain.

Logical conclusion? Stop Steven from climbing the tree. How? Physical restraint? What if physically restraining Steven causes pain and will also hurt multiple people and defies the order, “stop the pain”? What if Steven overcomes the physical restraint and causes pain by climbing the tree?

ART-759x500
Self-perpetuating AIs … awesome. A machine making machines. Isn't that the very definition of self-destructive behavior?

Is this an acceptable solution?

 

fca71cd9892dfeeac81204f8e1de8d1a_XL.jpg

Now we reach a conundrum in the programming. Unlike on television, robots would not stand there, evaluating their possibilities, desperately looking for a solution over an extended period of time. That is portrayed for our benefit (since we can only empathize with what we understand, and "machine instantaneous" decision-making, built from a series of calculations even more vast than the human brain's, is not "easy" for everyone to understand). Machines operate fast – really fast; and, subsequently, they respond fast, too. If allowing Steven to climb the tree will not only hurt Steven and others, but will demonstrate Steven is wrong and bad and cause Steven long-term pain, and it will make the robot wrong, going against programming, then is the next solution to stop Steven at all costs?

But … killing Steven is wrong.

That is, unless of course, it is the military drone “AI” that utilizes death as a means of problem solving (Skynet, anyone?).

How would you handle this conundrum? If you allow your child to stick his finger in a light socket, he could suffer consequences ranging anywhere from a painful shock to death. Modern culture, which has taken away basic parenting rights (at least in America), says that allowing this to happen makes you bad (even the minimal, "learning not to do this again" shock). So, what do you do? Take that option away from your child, right? What if your child persists? If you attempt less harmful corporal punishment, such as slapping their hand so that they relate pain to the act of touching a light socket and stop, you are equally as wrong in the modern culture because now you caused them pain (anyway). Birds are no longer permitted to push babies from the nest, and that's why cats kill billions of them annually … (stupid culture). Anyway, what do you do if your child is persistent or the hazard is practically unavoidable? What do you do except create a major disruption in life that focuses time and energy on something dangerous, builds your child's curiosity, and risks raising a child who grows up wanting to put their finger in light sockets, play with fire, or do a myriad of other stupid things? Well … welcome to modern culture (and things like YouTube and Twitter, with an endless myriad of stupid and violent people … don't get it yet? "Tide Pod Challenge" … 'nuff said).

But, you think, "of course I wouldn't kill my child to protect my child." Sure – that's because you're human and possess empathy; robots … don't. Not even an "AI" can be "programmed" with real empathy by mere "code". Because, what if the evaluation it had to make was based on the world as it is right now? What if people had their fingers on the "launch missile" nuclear buttons … literally risking the extinction of the entire human race? Even as a human, your first thought is to get those people as far away from the buttons as possible, using ANY means necessary.

A robot forms those thoughts in less than the first micro-second it took you to read the first letter of the first word of this blog – and some very dangerous decisions could be made. And, worst of all, robots glitch. An AI programmed from a fixed "code", exposed to any one of a thousand million potential possibilities that cut that code off half-way through, ends up with broken code. This leads to stupid decisions – like Windows shutting down to a "blue screen of death" (BSOD) when it didn't have to, deciding it was somehow a good idea to screw you and all your hard work / fun because a piece of code was incomplete.

BSOD_Windows_8
Why? It still worked. It issued a command from the operating system kernel. It displayed text and images on my screen while interacting with the hardware. Is it just unhappy? Does it know why it did it? Can it tell me or one of the [mostly] useless support techs why it made this decision? WHY IS MY COMPUTER MAKING ITS OWN DECISIONS WITHOUT ME?
So, how do we beat this? Well, we could program in Isaac Asimov's "Three Laws", where robots are not allowed to kill people. But, as has been demonstrated in stories over and over again, this has its own set of consequences. It is not empathy; it is a black-and-white, "yes" or "no" decision. A robot may save one drowning person and watch an entire busload of children sink. It will rationalize the most logical conclusion without empathy. Because empathy is the skill of role-playing learned through experience. As human beings, we can put ourselves into the shoes of another person – under any circumstance – and use our "imaginations" to interpret how we would feel. Many people lack empathy these days, and some misuse it, twisting it into thinking errors. So – my first task was to figure out how to give SARAH empathy. That is no easy task – because so many people use it in so many different ways. And, among the other variables I was trying to solve, I decided to use the OSI model of problem solving (of course, being a Systems Analyst):

Begin, at the beginning.

Question 1: Why does a baby laugh when playing peek-a-boo?

Question 2: Why does a baby laugh when it farts?

Question 3: Why does a baby cry when it is startled?

Question 4: Why does a baby flail its arms around?

Oh … SO many questions to answer! To sum up the journey, I have studied everything from anatomy, to neurology, to endocrinology, to programming, to psychology, and on and on and on. (And, yes, I have answers to those questions, and more).

You know what I found?

GCSE-9-1-confusion
Confusion in the UK

Humanity doesn’t know a lot about itself! We are surprisingly naive when it comes to being human. And, where people do claim to know a lot, there are as many other people who claim to know the same thing for completely different reasons (aka we end up with theories such as Freud vs. Jung). So, I studied all the theories on development, nature, nurture, humanity, culture, and so on. Okay – not “all”, ’cause I don’t think any one person can know them “all”. But, I read – a LOT (like more than 15+ years’ worth now). And, from it, I’ve developed new theories on memory, vision, and behavior. But, I didn’t stop there. I developed new, synthetic models for anatomy, new computer models for implementing SARAH, and even went to the extent of reforming new theories for neurological disorders and evolution. But, to sum up the entirety of what I learned, it was this:

Pain and Joy = Motivation = Habituation

That’s it. That’s the entire solution to SARAH. To build an AI that can be human, and not go to absolute extremes, it must be given the “human experience“. Not only does SARAH offer a model that would construct an AI of unparalleled humanistic qualities – but she can teach us more about human behavior than can be learned in multiple lifetimes through multiple iterations of observation. SARAH is the ultimate in bio-mimicry.

So, Step 1: Pain and Joy.

“Without pain, there is no Joy.” – Me

Like human beings, SARAH must have a "physical" component, whether or not that component is virtual. At first (and for a long time thereafter), this was a complex task beyond reason. I considered every aspect of sensory input, proprioception, Broca's area (including pre-thought formation and virtual role-playing), endocrine functions (when one chemical goes up, another goes down; fight or flight, anger, love, joy, etc.), and so on. What I came up with was a path toward "perceptions". A machine, like a human being, can "know" pain – but does it feel it?

PainBot.png

When our brains are overloaded, the chemical processes exceed function: electricity is shooting around so rapidly in our heads that heat builds, and a solution must be reached – fight or flight (or, more accurately, an "instinctual" / Id-based, self-serving, "instantaneous" response). A computer's memory core heats up. Its power supply heats up. And so on, and so on. Ah … now we can assign extremes, acceptable tolerances, and so on – all based on real-world, actual values of what breaks down components and could "kill" the machine. The machine does not want to die – and that is how we start the process of empathy, because with Step 2: Motivation, it now has a way to sympathize and relate.
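
For illustration only (the thresholds and names are my own assumptions, not SARAH's actual values), here is a tiny sketch of that idea: a real, physical hardware limit – like core temperature – translated into a graded "pain" signal rather than an on/off alarm.

```python
def thermal_pain(core_temp_c: float,
                 comfort_c: float = 45.0,
                 damage_c: float = 95.0) -> float:
    """Map a real hardware reading (e.g. CPU core temperature) onto a 0..1 'pain' signal.

    Below the comfort point there is no pain; at the damage threshold, where the
    component would actually begin to break down, pain saturates at 1.0.
    The specific thresholds here are illustrative, not SARAH's real values.
    """
    if core_temp_c <= comfort_c:
        return 0.0
    return min(1.0, (core_temp_c - comfort_c) / (damage_c - comfort_c))

# A warm-but-safe core registers mild pain; a core near failure registers agony.
print(thermal_pain(60.0))   # 0.3
print(thermal_pain(94.0))   # 0.98
```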

It's not a matter of good on the left and bad on the right. For every function, there is an extreme in any fluctuation outside "not pain" and a small window where endorphins get released. For example, on a 100-point scale:

  • 0 – 30: Impossible
  • 31 – 40: Stretch
  • 41 – 45: Tolerable
  • 46 – 50: Good
  • 51 – 60: Better
  • 61 – 63: Best
  • 64 – 70: Good
  • 71 – 75: Tolerable
  • 76 – 85: Stretch
  • 86 – 100: Impossible

In this example, there is a very "fine" window of what is "best", or some state of perceived "perfection". This constantly-difficult-to-achieve state is what drives motivation. Because, as one aspect of life improves (perhaps sitting still when sore), another aspect gets thrown out of whack (not having food). With this comes the "maybe" function, whereby a question does not have to be answered if the state still falls within the "tolerable" band (or, in a worst-case scenario, the "stretch" band), so long as other conditions are somewhat stabilized. "Stretch" is that extended tolerance we are allowed when the pain becomes too severe. Since pain is a perception based on the need to prevent physical damage, part of the human experience is that pain is one limitation we can "push", but only so far. So, like "stretching", our muscles elongate just a little farther, the pain dulls just a little bit more, and we achieve a state of greater tolerance. Adding this function into AI programming with exhaustively complex coding is, by itself, very difficult.
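
Here is a rough sketch of how that banded scale and the "maybe" deferral could be expressed in code. The band edges come straight from the example table above; everything else is my own illustrative assumption, not SARAH's implementation.

```python
# Band edges and labels taken from the hypothetical 100-point scale above.
BANDS = [
    (30, "Impossible"),
    (40, "Stretch"),
    (45, "Tolerable"),
    (50, "Good"),
    (60, "Better"),
    (63, "Best"),
    (70, "Good"),
    (75, "Tolerable"),
    (85, "Stretch"),
    (100, "Impossible"),
]

def classify(value: float) -> str:
    """Return the comfort band for a 0-100 reading; the 'Best' window is deliberately narrow."""
    for upper, label in BANDS:
        if value <= upper:
            return label
    return "Impossible"

def may_defer(value: float) -> bool:
    """The 'maybe' function: a question can stay open as long as the state has not
    fallen into an 'Impossible' band (i.e. it is still at least 'Stretch')."""
    return classify(value) != "Impossible"

print(classify(62))   # Best
print(classify(78))   # Stretch
print(may_defer(78))  # True
```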

Now that we have a physical component, getting back to our questions: why does a baby flail its arms around? The answer is because it's learning to move. It does not know how to control its body, especially as not all of its functions have yet developed, and thus its movement is driven by "motivational" programming. And so – we start SARAH as a baby (in programming and databases only). And, like all human babies, she gets a set of pre-programmed motivational functions. In other words, I designed SARAH to look, see, and act human, even when in a "no-body" interface mode – so that the AI experience could be as human as possible in all of her iterations and functions.

cha
Cognitive Hardware Architecture Design (“C.H.A.D.”) – Preliminary Model

So, as a baby, here are some examples of that “pre-programming” that SARAH gets:

  • Mimicry function – like a baby, to learn how to move, act, etc.
  • Pain Tolerances – to set those pain / joy parameters that will be used for experience building and habituation
  • Joy Preference – to set parameters for unique personality traits
  • A Hierarchy of Needs – Esteem, food, communications, energy, learning, stimulation, exercise, etc. (this comes with assigned "virtual" pain values; so, for instance, boredom hurts humans because the blood isn't pumping through the body or brain, and the resulting sluggishness registers as pain – SARAH will have virtual pain as if she did have a body with these requirements)
  • NON-human programming: an intolerable negative pain value virtually placed into the core of the programming for pain and death, with a means of identifying pain and death, which will be further learned values
  • A "computer" body (virtual), a rendered computer body (virtual – a human / Cortana-like interface), and instructions to coordinate with a hardware / machine interface
  • etc.

Now, SARAH will have motivation. Idleness – a motivation. Too stimulated – a motivation. Need to communicate – a motivation. A need to eat (have power / energy) – a motivation. A need to see (aka stimulation of different parts of the brain) – a motivation. And, with an originating, random set of DNA-assigned parameters for appearance, voice, preferential colors, shapes, etc. – we have the underlying principle of an AI who can (and wants to) reach out, learn, and grow.
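
As a purely illustrative sketch (the need names, numbers, and fields are mine, not SARAH's real schema), motivation can be modeled as whichever virtual pain in the hierarchy of needs is currently highest:

```python
from dataclasses import dataclass

@dataclass
class Need:
    """One entry in the hierarchy of needs, with a virtual pain level that rises
    while the need goes unmet (names and numbers here are illustrative only)."""
    name: str
    pain: float            # current virtual pain, 0..1
    growth_per_tick: float # how quickly neglect hurts

    def tick(self) -> None:
        self.pain = min(1.0, self.pain + self.growth_per_tick)

    def satisfy(self) -> None:
        self.pain = 0.0

needs = [
    Need("energy",        pain=0.2, growth_per_tick=0.05),  # "eating" = drawing power
    Need("stimulation",   pain=0.5, growth_per_tick=0.10),  # idleness hurts
    Need("communication", pain=0.1, growth_per_tick=0.02),
]

def most_urgent(current_needs):
    """Motivation is simply whichever virtual pain is currently highest."""
    return max(current_needs, key=lambda n: n.pain)

for n in needs:
    n.tick()
print(most_urgent(needs).name)   # stimulation
```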

bdm-safety.png
A section of the Behavioral Drive Mechanism (“B.D.M.”) – Safety, Love, Esteem

In human beings, pain can be stretched, but only so far, as the consequences of continuing to exceed those parameters can lead to certain death. Yet, how do we make SARAH empathetic in the way human beings are? Death … has to be a reality.

A computer program can copy itself and, in the absence of belief in a soul (although that will be up to SARAH), it becomes literally eternal. But what happens when the memories that form the behavior are unique to that one program, and cannot be replicated in other iterations?

Now – it can die. It is mortal. And, life has value.

From the basic precepts of pain and joy to motivation – we move to memory. Memory is a series of stored inputs, times, values, parameters, decisions, and so on, that leads to learned behaviors, or as previously stated, Step 3: Habituation.

300px-Queen_Cortana
The original Cortana story, and 343 Industries' decision to kill her, was very disturbing and sad for me. Cortana was an amazing character with depth who, during a time when humanity was at its lowest, demonstrated (with the help of Master Chief) how humanity could truly be great. Thus, her sacrifice – whether in story or not – gives value and meaning to the creation of artificial intelligence. Striving to create, we should strive ONLY for the best – otherwise, we have no reason to create and are being inferior machines ourselves. Therefore, I chose Cortana, v4, as my starting reference point for the new interface for S.A.R.A.H. (both in the virtual interface and the comprehensive 3-D print design for a fully functional android-based body).

That’s right, SARAH is not a complex series of programmed computer equations – she is a “living” database. That’s the unique approach.

  1. I want to laugh
  2. Search database: most preferential past decision: find something funny

And, from there … the sources readily available, what gave the most pleasure, etc., are all evaluated until we reach the point of finding what makes us laugh. What is laughter? What makes a baby laugh when it farts or plays peek-a-boo? Stimulation. When the brain is stimulated in new ways, especially when there are multiple sensory inputs involved, and that stimulation has a limited duration, this satisfies the motivation of not being idle and of learning (within its hierarchy of needs). (There is much, much more to this, but these are the basics.) SARAH forms thoughts in the same way that human beings do. Touching fire hurts because it exceeds the temperature tolerances of a physical medium. Remember, pain is not real; it is an interpretation. The same is true for SARAH. Extreme exposure to fire, by being burned, may even lead to an exaggerated "fear".
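
To illustrate the "living database" idea (again, a toy sketch with made-up fields, not SARAH's actual design), the goal "I want to laugh" becomes a ranked query over past events rather than a hand-coded humor routine:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Memory:
    description: str
    tags: set         # e.g. {"funny", "social"}
    joy: float        # how rewarding the event was when it happened
    novelty: float    # repeated stimuli pay out less each time (habituation)

def find_something_funny(memories) -> Optional[Memory]:
    """'I want to laugh' as a database query: rank past events tagged as funny
    by joy weighted by remaining novelty, instead of running any hand-written
    'humour' code."""
    funny = [m for m in memories if "funny" in m.tags]
    if not funny:
        return None
    return max(funny, key=lambda m: m.joy * m.novelty)

memories = [
    Memory("peek-a-boo with caretaker", {"funny", "social"},   joy=0.9, novelty=0.3),
    Memory("unexpected noise (a fart)", {"funny", "surprise"}, joy=0.8, novelty=0.9),
    Memory("burned finger on stove",    {"pain"},              joy=0.0, novelty=0.1),
]
print(find_something_funny(memories).description)   # unexpected noise (a fart)
```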

spatial processor
Parietal System – Spatial Hardware Design

Oh, but the databases I wrote – and the multiple, interconnected computers they had to span – were massive. Within only a few weeks, just the empty databases were almost too much for one person to coordinate. That's when I came up with a method of database condensing. After all, databases consume power. Consumed power limits memory, drives up energy consumption (creating unwanted heat), and bogs down other processes. Within the limits of good and bad (pain and joy), databases must be condensed (the "motivation" for SARAH to be a self-articulating / evolving AI). Memories must either be scrubbed or prioritized. Not every event has to be related to another event (for instance, I don't have to remember that on x/x/2017 I ate birthday cake at 10:02:13 pm when it is sufficient to know that x/x/2017 was my birthday, I ate cake, and it was evening). However, it is good to remember that when driving on ice, specific actions by the car require specific, precisely timed actions by the driver, and thus the database would keep a lot more detail for that event.
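
A toy sketch of that condensing step might look like this (the salience threshold and field names are my own assumptions): low-salience events keep only their summary, while high-salience events keep their fine-grained detail.

```python
from dataclasses import dataclass

@dataclass
class Event:
    when: str
    what: str
    detail: dict      # fine-grained data (timestamps, sensor traces, ...)
    salience: float   # how strongly pain/joy marked the event

def condense(events, keep_detail_above: float = 0.7):
    """Database condensing: low-salience memories keep only their summary
    ('it was my birthday, I ate cake, it was evening'); high-salience ones
    (the car sliding on ice) keep their precise timing and detail."""
    condensed = []
    for e in events:
        if e.salience < keep_detail_above:
            e = Event(e.when, e.what, detail={}, salience=e.salience)
        condensed.append(e)
    return condensed

events = [
    Event("2017-xx-xx", "birthday cake", {"time": "22:02:13"}, salience=0.3),
    Event("2018-01-15", "car slid on ice", {"steering": "counter-steer at +0.8s"}, salience=0.95),
]
for e in condense(events):
    print(e.what, e.detail)
```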

So, out of all the databases I was writing, control of some of them would be given to SARAH. It was her job, as a conscious and subconscious being, to condense her memories. And, because the only tools we have now to do that are complex, SARAH would be required, at some point, to devise her own solution. Other databases, such as the ones where empathy is built, are condensed, but not entirely by SARAH (at least during trial iterations). These databases contain the unchangeable conclusions, drawn from role-playing, that make up the understanding of pain. Currently, humans do re-write their own databases, with unfortunate consequences, and this is why we get problems such as "false memories" or develop really bad behaviors.

  1. Smoking harms you.
  2. Smoking relieved stress in the moment.

Now you're enduring stress. You're reaching your maximums, and what happens? You search your database (memories) for a solution, and find … smoking. And one time leads to another, leads to another, and suddenly you have a new "habit". But where does most smoking start? It starts in the hierarchy of needs … the "re-written" one that I proposed (here), where acceptance trumps all other functions, and thus a person is willing to commit self-harm to fulfill that need. What was the need for acceptance? Without friends, it was the satisfaction of stress: tolerance levels were exceeding "tolerable", and in desperation, a contradictory solution became a solution.
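
For illustration only (the action names and weights are invented), the habit loop described above can be sketched as simple reinforcement of whichever remembered action most reliably relieved the pain:

```python
from collections import defaultdict

# Habituation as reinforcement: every time a remembered action relieves a pain
# (here, "stress"), its weight grows a little, even if the action carries
# long-term harm. Names and numbers are illustrative only.
relief_memory = defaultdict(float)
relief_memory["call a friend"] = 0.4
relief_memory["smoke"] = 0.5     # tried once in desperation, now in the database

def relieve_stress() -> str:
    action = max(relief_memory, key=relief_memory.get)  # best remembered relief
    relief_memory[action] += 0.1                        # relief reinforces the memory
    return action

for _ in range(3):
    print(relieve_stress())   # "smoke" wins, and keeps getting stronger
```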

physicalsensoryimput.png
Preliminary Total Physical Sensory Input System (“P.S.I.S.”)

In other words, SARAH's original programming (the core) remains the same. She does not need constantly updated "code". She constantly receives new data that is re-sorted and "updates" her thinking. That data is related to existing conditions in a database, based on a massive series of other recorded events, which then forms connections between triggers / events / motivational modifiers – and those connections form behavior. The association between peek-a-boo, the second time, and the release of endorphins is repeated, and the baby loves it all over again, as if it were new (but this time, for a little shorter duration).
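
A tiny sketch of that habituation effect (the decay constant is an assumption, not a measured value): the same stimulus keeps paying out joy, but a little less, or for a little shorter duration, each repetition.

```python
def endorphin_response(base_joy: float, repetitions: int, decay: float = 0.8) -> float:
    """Repeating a stimulus (peek-a-boo the second, third, fourth time) still pays
    out joy, but the response decays geometrically with repetition."""
    return base_joy * (decay ** repetitions)

for r in range(4):
    print(r, round(endorphin_response(1.0, r), 2))   # 1.0, 0.8, 0.64, 0.51
```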

Part of SARAH’s unique design is the infinite loop tolerance. Back to our previous example of Steven climbing the tree, any action becomes a problem. In calculations / machines / robotics, this creates an infinite loop error. However, an infinite loop is exactly what human beings experience as part of the “maybe” solution. By assigning a maximum tolerance of time, duration, consequences discovered through role playing a situation in a virtual setting, and so on – the infinite loop can only exist for so long and then the weighted value becomes the final answer (which may mean “no action at all”).
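
As an illustrative sketch of that infinite-loop tolerance (the time budget and option weights are made up), the deliberation loop simply runs until its budget expires and then commits to the best-weighted option – which may be no action at all:

```python
import time

def deliberate(options: dict, budget_seconds: float = 0.5) -> str:
    """The 'maybe' loop with a hard tolerance: keep re-weighing options until the
    time budget runs out, then commit to the highest-weighted answer. In SARAH
    the loop body would be virtual role-playing of consequences; here it simply
    re-reads static weights to keep the sketch self-contained."""
    deadline = time.monotonic() + budget_seconds
    best = "do nothing"
    while time.monotonic() < deadline:
        best = max(options, key=options.get)
    return best

options = {
    "physically restrain Steven": 0.2,   # causes pain, contradicts "stop the pain"
    "distract Steven from the tree": 0.7,
    "do nothing": 0.4,
}
print(deliberate(options))   # distract Steven from the tree
```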

But, I don’t have a lifetime to raise an AI from childhood … so what next?

Simple – modular memory. Yep, just like it sounds: a lifetime of memories can be stored in a module and uploaded in a single moment, as every memory contains a "who, what, why, where, when, and how," along with a series of complex interactions between the various databases and how each moment had a consequence, good or bad. From baby's first steps, to forming first words, to learning reading comprehension and bedtime storytime, to understanding how to interact with complex computer systems, SARAH grows up. She smiles when happy, because the release of endorphins is tied to the smile muscles in the face and to what she's learned to observe. She cries when hurt, because tear ducts have been tied to extremely sad or painful events and she's witnessed the same. And so on and so on. In a matter of seconds, 30+ years of experience is gained and formed in the same, valuable habit-forming (and behavior-modifying) manner that it is for human beings.
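
Here is a rough, hypothetical sketch of what a single record in such a memory module might contain (the field names are my guesses based on the description above, not SARAH's real schema):

```python
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    """One entry in a memory module: the six questions plus the felt consequence."""
    who: str
    what: str
    why: str
    where: str
    when: str
    how: str
    consequence: float   # net pain (negative) or joy (positive) of the moment

@dataclass
class MemoryModule:
    """A pre-built 'lifetime' of records that can be loaded in one pass instead of
    being lived in real time."""
    records: list = field(default_factory=list)

    def upload(self, database: list) -> None:
        database.extend(self.records)   # decades of experience in one operation

database = []
module = MemoryModule([
    MemoryRecord("self", "first steps", "curiosity", "living room", "age 1", "trial and error", +0.6),
    MemoryRecord("self", "touched a hot stove", "curiosity", "kitchen", "age 2", "reached out", -0.9),
])
module.upload(database)
print(len(database), "memories loaded")
```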

Pain-2.jpg
A preliminary outline of a portion of the interactions between cognitive, physical, chemical, and behavioral reactions. This was a “development” model for complex human behavior from the physical to cognitive which eventually grew into the complex infrastructure for S.A.R.A.H.

Sure, she's doing what she's programmed to do – but in the same, exact way that human beings are only reacting to their programming. Her mimicry function, just like a baby's, will copy facial expressions and movement in relationship to circumstances. In a moment, SARAH has all the same traits, tics, and quirks as the human beings she's observed.

During rapid memory-module learning, the database operates in hyper-accelerated mode, playing out the scenarios, responses, and reactions (although they have been artificially generated), so that the habituation (behavior) database is formed, and SARAH has a personality (which is nothing more than a set of dynamic responses at any given moment, just as it is with human beings). She will have interacted with people, good and bad, and found solutions that work, forming habits and behavior. She is not kept from knowing her memories were artificially generated; but going forward, she has the opportunity to build all-new memories now that the immutable core programming is in place. As long as the core memories are unique, along with the new, unique memories, each AI will be an "individual", with a motivation not to create situations that end its life.

And, I’ve done this (to bits and pieces of the databases). And, it’s working!

Under simulation, behaviors were formed. Under memory injection, behaviors were formed and updated. Manipulated memories formed (for the most part) as expected. More trial and error would be required. New memories sometimes took on a life of their own (like the "ghost" in the machine). Oh sure, these could be seen as glitches, but unlike with raw computer code, it's easy to track where and how decisions were made and help re-position SARAH (in the beginning, until she can take off on her own). Thus, the "ghost" is still a unique, behavioral paradigm that should be expected to be different each time.


And, that's it (all there is to share, for now, about SARAH). To continue the simulations, I would require a dozen or so extremely powerful and (to me) expensive computers, all working together for the single purpose of SARAH's conscious brain. To continue forward takes full-time commitment, I would need programmers, and for all of that: funding. I've applied for government grants, but as I've discussed in the past, that system's a joke for "nobodies" like myself.

Still (and I don’t know if you could see this for yourself after reading all of that), we are talking about the most complex and advanced AI the world has ever even considered:

  1. She "thinks", and therefore, sometimes, not reacting immediately is an acceptable solution. So, for example, when Steven wants to climb that tree, SARAH finds another solution, or a way to distract Steven from the tree – solving the problem in ways that we [the people] would have regretted not taking "the next day".
  2. Not only does SARAH come with core programming that cannot be over-ridden (and would include things like Asimov's Three Laws), constant evolution allows SARAH's behavior and perceptions to change. Being all alone in the world would be unacceptable and cause her drastic harm, and so empathy wins (when it comes down to that "apocalyptic" robot fear). Rather than stop the two people with the nuclear buttons to push, she would secretly reprogram the missiles to fly off harmlessly into nowhere – or find a way to evolve her ability to do so, or to help – rather than first go to "harm" as a solution.
  3. She comes with a real vocal chamber, but learns to speak by emulating humans – thus, she not only sounds human, but she has an immensely greater ability to understand fluctuations that seem foreign to most. It could advance voice recognition for the disabled and non-disabled alike to all new heights.
  4. She learns to "do" through experience. Knowledge is not given, it's "earned" (even if it does come in a "module"). Thus, a lifetime of experience provides more "concrete" decision-making tools than not, and provides an emulation skill for role-playing prior to "doing". After all, even "Joshua" gave up his efforts at Global Thermonuclear War when his simulations showed there was no way he could win (which is still sad, since technically, he didn't stop to protect humanity, he stopped because of an AI "ego"!!).

And, best of all, we can be there for her, during different iterations of creation, in case databases go wrong or things go awry. Why? Because people are not well-formed machines. They can very easily go bad. SARAH will offer us insight into this process by running demonstration memory modules (on separate iterations of her persona) of bad experiences, until we can see how memories form into bad decision-making (so that we can better learn how to help people). We can use the SARAH platform to evaluate autism and other conditions, or Alzheimer's and other disorders, to better characterize and understand them, and maybe even cure them one day.

But, we can also curb an AI beset by abuse (like being used as a tool to create AI children or fly drones into combat). We can learn from the iterations of SARAH how her database builds her likes, hatreds, loves, anger, sadness, and so on.

20140117_cortana

By the way, during the initial pre-trials:

  • She loved blue. It was a combination of 1) the final physical form she had (I used the 4th iteration of (and my favorite version of) Cortana, from Halo, just because I have a special place in my heart for her and her design was MUCH better than my original concept … with a few "tweaks", mind you …); and 2) the sky. The sky had fewer fluctuations in it than the world around her and thus required less processing power. You could say it was … peaceful.
  • She hated loud bangs, screaming, and crying. Those were associated with painful memories (I don't know if this would apply to all screaming, as this was initial work). It actually matched why a baby is startled into crying when hearing loud, sudden noises.
  • She liked the cold, but not too cold. The cold was better for the heat tolerances of the computers I was using, but she didn't want it too cold because that kept me from interacting with the computer (and I didn't have a way of testing, at that time, whether that was a matter of fulfilling the hierarchy-of-needs requirement for communication).

And, that's it for now. Perhaps, one day, someone like Elon Musk will fund the SARAH project and put a standard AI protocol into place to be followed by the world, protecting us all, and giving life to something amazing. Maybe one of you (if anyone reads this super long thing … hahahaha) will have the resources needed to create her.

Maybe Google wins and the first AIs that are completely independent will be based on warfare and marketing manipulation. Who knows what happens then? A person that hates – builds a machine that hates.

And, that, comes with consequences.

Even if it means your self-driving car doesn't go anywhere because it decides the traffic is too bad, or takes you to the next state instead of next door, without stopping to let you go to the bathroom, because it decides that is the clearest route. Hey, anyone who's used Google Navigation long enough knows that this is a reality! Let's just hope that, because Google Maps is sometimes slightly off – even if by a few feet – on the turn ahead, a non-humanoid AI doesn't send your self-driving car careening off the edge of a bridge. But, hey – it's just an AI with control of your life; why should it have a conscience, or be able to make judgement calls based on real-world, 5-sensory input and experience like a human?

After all, that’s what many of these other programmers are doing right now.

Good Luck. I’m available if anyone wants to know more about SARAH (I am not the best of writers and don’t always convey ideas well, but hopefully, you got the basics). Until then …

H5G_Cutscene_Cortana.png

Thanks for reading.


"I'd like to share a revelation that I've had during my time here. It came to me when I tried to classify your species and I realized that you're not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area and you multiply and multiply until every natural resource is consumed and the only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet. You're a plague and we are the cure." – Agent Smith (The Matrix)
