Turing Adventure working again! New version just released

Some time ago the chatbot server stopped working, and Turing Adventure with it.
I have taken advantage of the situation to change the way the game communicates with the server, and I have released a new version. Outages like this one should not happen again. If you want to play the game again, or show it to someone, go ahead and download it again.

While I was at it, I took the opportunity to give the game some love. The intro is now controlled by the player and no longer jumps to the game automatically.

New intro interface

In the prison scene, I have fixed some bugs, tweaked the interface and added some small features. For example, the chat bubbles now have distinctive styles and a busy animation, and the description bubble is now visible while you are chatting with a robot. I think this will help newcomers a lot.

Busy animation

I am still working on a full-featured game whenever I don’t have a more pressing project (I work as a freelancer), but I’m not moving very fast. I’m trying to figure out a more efficient way to work on the game, and coming back to the jam version from time to time to test new features is an interesting option.

You can download Turing Adventure from Game Jolt:

Implementing Turing Adventure: First steps

After months of studying and planning, I’ve started building a complete game based on Turing Adventure, the game I developed for the last Adventure Jam, which received the Colossal Leap Award.

Me yesterday after planting the first «stone» of the new Turing Adventure

It’s been almost a year since then, but I hope that all the preparation I did will ease the process.

This is what I can tell so far:

It will be released episodically

There will be three episodes. The first one will be a size I feel confident I can finish. Hopefully it will serve as an experience builder, so I can correct the mistakes I make along the way and enhance the features people enjoy most. Also, I intend to invest any money I earn into the following episodes.

Distribution

The interface will be designed for mouse and keyboard, but it will also be touch friendly. The game will go to every store I manage to get it into. That means the Windows Store for desktop, maybe itch.io, and mobile stores as soon as I figure out a nice way to type your input on mobile (maybe voice recognition). I will apply for Steam Greenlight at the same time.

It will be always online

This is something that carries a certain stigma for some users, especially after the Xbox One launch, but there are strong reasons for it:

  • First and foremost, because of license issues related to some software I’m using, I cannot release the chatbot components with the game. But if I keep those components on a server instead of shipping them, there are no such issues.
  • I might include a voice recognition system, so you will be able to talk to the game instead of typing. All modern voice recognition systems are online services.

Incidentally, this comes with other advantages:

  • In order to make the robots react better to human interaction, I need to read conversations with players (with the players’ permission). Newly learned behavior will be available instantly to all players.
  • Chatbot technologies usually require a lot of computing power, which is not good for mobile platforms or low-end PCs. Also, all the reliable chatbot technologies I know of are server based.

Low poly aesthetics, hand-drawn pictures

I will pursue a low poly aesthetic to achieve a modern look that fits the story. The game will feature some hand-drawn art to complement the storytelling, although how much will depend on what I can afford.

A small Kickstarter?

I don’t like this idea because of the amount of effort it requires, effort that would be taken away from game development. But I might need the help of some professionals to finish the game properly, and I might not be able to pay them. A small Kickstarter could be a solution for that.

I’m planning the development in three stages:

First, I am building gray levels for the first episode, with some basic interaction, to show the game to friends and colleagues and get as much feedback on the story as possible.

Then I will build a complete version of the game but release it as a beta, or some kind of early access. I expect the robots in this version to be very stupid, despite my best efforts, and the game to be almost unplayable. But I hope to improve this situation in a relatively short period of time, thanks to the users’ feedback and conversation logs.

After that, I will upload the first polished version of the game to the stores, where hopes will meet reality 🙂.

Designing Turing Adventure Part Four: AIML: The technology behind Turing Adventure

This is Part Four of an ongoing series of posts describing the design of Turing Adventure. You can try the game here.

Futurama, how a robot works

Technically, Turing Adventure is made possible by chatbot technologies. In contrast to other, more complex artificial intelligence models, chatbots are based on some very simple principles that were first formulated during the ’60s and experienced a resurgence at the end of the ’90s and the beginning of this century.

They are based on pattern matching: for a certain input, the chatbot is programmed to return some predefined output. This is what is called a rule. Pattern matching was the underlying mechanism of the conversational adventures that preceded graphic adventures, where you had to type input that matched a pattern to advance in the story. Usually the pattern to match is defined with something like regular expressions, so through the use of wildcards (i.e. a symbol that matches any word), a given output can serve as a reply to several inputs. This way, you do not need to type exactly what the programmer expected, nor does the programmer have to predefine an endless list of possible inputs for each option.
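To make the idea of a rule concrete, here is a minimal example written in AIML, the chatbot language described later in this post. The pattern and the reply are made up for illustration and are not rules from the game. In AIML, inputs are normalized to uppercase, * is the wildcard, and <star/> echoes whatever the wildcard matched:

    <!-- One rule: a pattern with a wildcard and its predefined output -->
    <category>
      <pattern>MY FAVORITE MOVIE IS *</pattern>
      <template>
        I have heard that <star/> is a good movie. I prefer documentaries about robots.
      </template>
    </category>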

Reductionism

However, with chatbots you can do something that was impossible with conversational adventures: chat naturally. With the latter, you had limited vocabulary and grammar options, which made conversations feel like typing commands. This later evolved nicely into a command-based graphic interface (starting the graphic adventure genre), like the LucasArts games that used SCUMM.

Mystery House and Maniac Mansion examples

Command-based conversational interfaces (left: Mystery House) eventually evolved into command-based graphic interfaces (right: Maniac Mansion)

The natural chat capabilities of chatbots are achieved with a technique called reductionism. This technique consists in matching the user input but, instead of returning an answer to the user, feeding a modified version of it back into the system. This new input should have equivalent semantics, but it can contain less social etiquette, some orthographic corrections, or simpler grammar. In other words, the transformed input should mean the same, but be shorter, or reduced. We are transforming a potentially complex input into a simpler-to-match one. This process is repeated until we reach the simplest possible sentence with the same meaning as the original input given by the user.

If we have a corpus of rules that handle reductionism in the abstract, reducing inputs before treating them, we only need to write answers for the simpler inputs. If the user enters a line that is complex, or that the developer didn’t think of, it will be simplified to a line for which the developer has written an answer.

With a large enough and properly defined corpus of reduction rules, chatbots can provide answers to questions that the programmer of the chatbot had never considered.

It is impossible for the developer to think in advance of everything the user might write. It is through reductionism that the user is able to speak naturally with the chatbot and still get good results.

An example

Let’s say that we are adding knowledge to our chatbot about what Star Wars is. When we ask it What is Star Wars?, the robot may reply: Star Wars is a classic Hero’s Journey movie that stands out because of the robots that appear in it.

We can write a rule that represents this dialog line. But what about other ways of asking about Star Wars? For example, what if the user says: Excuse me, can you please tell me what do you know about Star Wars? We shouldn’t need to take care of that, nor of any other way of asking about Star Wars. Reductionism should simplify the user input down to what is star wars?

What follows is a series of reductions that could simplify the user input in this example (* will work here as a wildcard, matching one or more words):

Original user input: Excuse me, can you please tell me what do you know about Star Wars?

Matched pattern: Excuse me *

At the start of a sentence, excuse me only has a phatic and/or courtesy function. We can discard it and feed the rest of the sentence back.

Feedback input: can you please tell me what do you know about Star Wars?

Matched pattern: Can you please *

Again, please is a particle that adds no relevant meaning to this sentence. We can chop it off and feed the result back.

Feedback input: can you tell me what do you know about Star Wars?

Matched pattern: can you tell me *

In a sentence with this structure, can you doesn’t really add any meaning: all of the relevant meaning is carried by tell me. So again, it can be chopped off.

Feedback input: tell me what do you know about Star Wars?

Matched pattern: tell me what *

Tell me can carry some meaning, but in this case we can discard it because of the what that follows, which indicates that a question starts there. So we discard tell me. Note that we would not discard tell me in other inputs where it is not followed by an interrogative pronoun and some more words.

Feedback input: what do you know about Star Wars?

Matched pattern: what do you know about *

All the words in this pattern have meaning, so we cannot really discard any of them to simplify the question. However, there is a simpler way, with fewer words, to ask the same question. So we change the sentence altogether and feed it back to the system:

Feedback input: What is Star Wars?

At this point, the rule we wrote before would match, and the chatbot would reply to the user with the programmed answer.
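For the curious, this is roughly what a couple of those reduction rules, plus the final answering rule, could look like in AIML (the language described in the next section). The <srai> tag is what feeds the simplified input back into the system. These patterns are a simplified illustration, not the actual corpus used in the game:

    <!-- Reduction: drop the courtesy opening and re-feed the rest -->
    <category>
      <pattern>EXCUSE ME *</pattern>
      <template><srai><star/></srai></template>
    </category>

    <!-- Reduction: rewrite the long question into the canonical one -->
    <category>
      <pattern>WHAT DO YOU KNOW ABOUT *</pattern>
      <template><srai>WHAT IS <star/></srai></template>
    </category>

    <!-- The only rule with an actual answer -->
    <category>
      <pattern>WHAT IS STAR WARS</pattern>
      <template>Star Wars is a classic Hero's Journey movie that stands out because of the robots that appear in it.</template>
    </category>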

This can seem rather complicated but, again, we have to consider that the reduction rules have been written beforehand. We only added one rule, and the corpus of reduction rules sequentially simplified the input down to the most basic form we considered. It works the same way when asking about other topics. It even works with bits of information that the user teaches the chatbot (some chatbots are able to learn new knowledge from the user, e.g. Mitsuku).

You can try asking Alice several questions about Star Wars or other topics here: http://alice.pandorabots.com/

You can also try with Mitsuku. Try to teach her something, and then ask her about it in different ways: http://www.mitsuku.com/

Other techniques

Chatbots can manage default information like their name, their hobbies and so on, and use it in the conversation. That is very helpful to show off the bot’s personality. But they can also keep track of the context of the conversation. They can remember information that the user provides, like her name, gender and so on, and use it later in the conversation. This gives the user a strong feeling of being understood.
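In AIML this kind of memory is handled with predicates. Here is a minimal sketch, my own example rather than one taken from the game, of a bot remembering the user's name and using it later:

    <!-- Store the name the user gives us in the "username" predicate -->
    <category>
      <pattern>MY NAME IS *</pattern>
      <template>Nice to meet you, <set name="username"><star/></set>.</template>
    </category>

    <!-- Recall the stored name later in the conversation -->
    <category>
      <pattern>DO YOU REMEMBER ME</pattern>
      <template>Of course I remember you, <get name="username"/>.</template>
    </category>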

They can also answer a given input differently depending on the topic of the conversation. If, for example, we ask the chatbot Do you like it?, the chatbot can decide whether it refers to, say, ice cream, flowers, or anything else, based on previous interactions, and give a personalized answer for the first two and a generic answer for the rest of the topics.
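AIML models this with topics. A sketch of the ice cream case, again just an illustration and not a rule from the game:

    <!-- Note that the conversation has turned to ice cream -->
    <category>
      <pattern>I LOVE ICE CREAM</pattern>
      <template><think><set name="topic">ICE CREAM</set></think>Me too!</template>
    </category>

    <!-- Personalized answer, only matched while the topic is ice cream -->
    <topic name="ICE CREAM">
      <category>
        <pattern>DO YOU LIKE IT</pattern>
        <template>I do. Chocolate, preferably.</template>
      </category>
    </topic>

    <!-- Generic fallback for any other topic -->
    <category>
      <pattern>DO YOU LIKE IT</pattern>
      <template>I am not sure what you are referring to.</template>
    </category>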

They can also provide different answers to the same input depending on the previous answer given by the chatbot itself. A nice example of that is how to react to a yes/no input from the user after the chatbot has asked her a question.
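In AIML, the <that> tag matches against the chatbot's own previous sentence, which is exactly what the yes/no case needs. A toy example:

    <!-- The bot asks a question -->
    <category>
      <pattern>I AM BORED</pattern>
      <template>Do you want to hear a joke?</template>
    </category>

    <!-- "Yes" right after that question gets a specific answer -->
    <category>
      <pattern>YES</pattern>
      <that>DO YOU WANT TO HEAR A JOKE</that>
      <template>Why did the robot cross the road? It was programmed to.</template>
    </category>

    <!-- "Yes" in any other context gets a generic answer -->
    <category>
      <pattern>YES</pattern>
      <template>I see.</template>
    </category>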

AIML and Alice

The majority of those techniques, reductionism, context awareness and others, were proposed by Dr. Richard S. Wallace between 1995 and the early 2000s, when he proposed the AIML (Artificial Intelligence Markup Language) specification. AIML is an XML dialect aimed at the definition of chatbots (although the recursion behind reductionism allows some backtracking-style programming, and it can be used as a deductive engine, much like Prolog). The creation of AIML and Dr. Wallace’s work eventually led to the creation of the Alice Foundation, devoted to the promotion and adoption of AIML and open source chatbots.

The Alice Foundation is also responsible for the maintenance of Alice, an open source set of AIML files, with more than 90,000 rules, that make up an open source chatbot (and give the Alice Foundation its name). Many of those 90,000 rules are designed to handle reductionism in normal English conversation. Thus, Alice provides a complete set of rules to start from when writing a new chatbot. A new chatbot can be written starting from Alice: discarding many of the rules that give Alice its personality; adapting or rewriting others (like greetings or conversation openers) to reflect the new chatbot’s personality; and then writing new rules, triggered by the user directly or through reductionism, that make the new chatbot different from the others.

When writing a chatbot

There are many technical considerations when writing an AIML file. Reductionism must be implemented properly to avoid infinite loops. Rules should be well thought out to make the best use of reductionism, and the new rules should be flexible enough to be modified in the future without much drama. Many programming patterns and best practices apply here to write AIML rules that comply with those requirements.
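As a toy illustration of the infinite loop problem (my own example, not a rule from the game): a reduction must always move towards a strictly simpler input, otherwise a category can end up matching its own output forever:

    <!-- DANGEROUS: the rewritten input still starts with PLEASE,
         so this category matches its own output and recurses forever -->
    <category>
      <pattern>PLEASE *</pattern>
      <template><srai>PLEASE KINDLY <star/></srai></template>
    </category>

    <!-- SAFE: the rewritten input is strictly shorter,
         so the recursion always terminates -->
    <category>
      <pattern>PLEASE *</pattern>
      <template><srai><star/></srai></template>
    </category>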

However, we cannot forget that when we write a chatbot, we are kind of impersonating a character. That means that, before the technical considerations, we must design the character and write dialog lines, just as we would when writing a video game or movie plot. The technical work should come after the artistic work.

In addition, the effectiveness of chatbot writing is highly unpredictable, as unpredictable as human beings are. We are dealing with people here, and not people dealing with a fixed computer interface, but people talking, about any topic they choose. Experience is a plus, but it is impossible to anticipate all the reactions, topics and ways of expressing themselves that users will throw at the chatbot. That is why testing is paramount. After you write your chatbot, you have to carefully read all the conversations that people have with it. Take notes whenever the chatbot says something that its character counterpart would not say, and fix it. You will never achieve a successful chat in 100% of the conversations, but you can aim for a 95% success rate to declare your chatbot capable and well written.

Designing Turing Adventure Part Three: Turing Adventure: Overview and Design Choices

This is Part Three of an ongoing series of posts describing the design of Turing Adventure. You can try the game here.

Turing Adventure box art

Turing Planet box art

Turing Adventure is a ten-minute-long point-and-click adventure in which I tried to address the issues described above. We developed it in two weeks for the Adventure Jam, so the game is short.

I propose a universe in which there is a robot planet, called Turing Planet, inhabited by a mysterious race of robots that were at war with humans a hundred years ago. Those robots are completely isolated and don’t have any contact with humans, but they would kill any unfortunate human being who happens to get close to their planet.

Naturally, the human race is worried about what the robots are developing so secretly, and sends spies, disguised as robots, to find out what they are up to. You are one of those spies, and unfortunately you were caught accessing sensitive data. The robots, however, are not quite sure whether you are human or not, so they have imprisoned you. This is where the game starts.

Intro comic for Turing Adventure

Otherwise, Turing Adventure is a more or less typical graphic adventure, but there are no dialog lines for the player to choose from when interacting with others. Instead, you type what you want to say. That makes the player feel a closer relationship with the main character, since the words the latter says are her very own.

This is not like typing commands or attempting basic conversations in old conversational adventures. When I say that you can type what you want to say, I mean it. You can question NPCs about the quest at hand… Or you can chat with them about your job, your hobbies, your dreams… Whatever! They will respond!

This approach forces the player to stop, analyze the situation and, for the first time in graphic adventures, really think: what would I do, and what would I say, if I were actually there?

There are no commands, just chat naturally with the robots

Talking to characters that can hold a reasonable conversation about any topic carries the risk of inviting the player to have fun with the chatbot instead of playing true to her character. That’s why, in the game, you have a timing factor to overcome: the countdown on an artifact that looks like a bomb, the music, the light… If you ask the robots about those elements, they will inform you in a casual way: You are dying in five minutes.

Everything is designed to give the player a feeling of urgency and stress, so she won’t mess around and will focus on the task at hand: escaping the prison.

Don’t fool around, try to escape before it’s too late!

I expect all these elements to help the player feel a deeper connection with the main character, since they address the two problems I discussed above.

Moreover, players will empathize with non-playable characters more than usual, thanks to known psychological effects of human-machine interaction, like the Eliza Effect. Humans tend to feel empathy with machines that show some human qualities, whether it is a humanoid body, a face or some other means of showing emotions, or, in this case, the ability to talk. That opens a new dimension for non-playable characters in games, which can show their personalities not only through their design, background and a few dialog lines, but through whole conversations.

We will dive deeper into this topic in future posts.

Finally, although sometimes surprisingly clever, chatbots, even those we are using for Turing Adventure, cannot talk like humans. That means they have limitations, and some responses feel artificial, out of context, or out of place… if you expect to be talking to a sentient being.

But you would forgive those weird answers if you were talking to a machine, like a personal assistant (e.g. Siri or Cortana). That’s why Turing Adventure is set on a robot planet full of robots: when they give you some unnatural response, you’ll think Well, robots… and it won’t feel out of place. By now, you may know how much I try to fight things that feel out of place.

 

Robotz! Can be cocky too!

And, speaking of digital assistants, I expect them to become much more fashionable over the next couple of years. So I hope Turing Adventure has arrived at the right moment.

Designing Turing Adventure Part Two: How to push the adventure feeling further

This is Part Two of an ongoing series of posts describing the design of Turing Adventure. You can try the game and vote for it in the Adventure Jam here.

Make the conversation matter

Convincing Glotis in Grim Fandango

One of the things that unnecessarily hurts the “I’m living it” sensation is the classic dialog tree system. Often, the NPC responds to your questions as long as you keep asking, so you don’t worry about which question you actually ask. You can always go back in the tree and try another path. Of course, in some conversations there are tree leaves that act as triggers, but more often than not they are easily avoidable, at least until you’re sure that there are no more dialog lines.

Some games present you with several options at the same time that are all triggers, but all of those options have the same consequences (with the only possible exception of choosing between several game endings, or between the light and the dark side). Players choose between them trying to play true to their characters, but those choices are actually just illusions of freedom.

Other games are slowly innovating in this area. To name one, Kentucky Route Zero often presents the player with several choices that have a small but persistent impact on the story. They are triggers, unavoidable, and they have consequences for the rest of the game. For example, when you’re asked about the name of the dog that accompanies you, you have to choose it, and the one you choose will be its name for the rest of the game. It is not a huge consequence, but after choosing poorly a couple of times, thinking that you can go back and explore the rest of the dialog, you start thinking twice about the answer you give. It’s a small change, but it is a huge deal in terms of gameplay.

However, no matter how much we tweak it, the thing really preventing you from impersonating an actual adventurer is the mechanic of choosing itself! What if we get rid of it? What if you have to think about exactly what you are saying?

Make the timing matter

Timed mission in Far Cry 3

Another aspect of adventure games that makes the sensation you feel while playing different from what, say, Indiana Jones may feel in the film is the timing. Don’t get me wrong: a puzzle should be a puzzle, with time to think it over, and it’s OK to combine relaxing events with rushed ones. However, you don’t think the same way, or more importantly, feel the same, when a giant rolling ball is about to smash you as when you know you are safe. That is the reason why in adventure games you do things that are at odds with what a character in a book or a movie would do.

Many players enjoy exploring the Far Cry world (I certainly do!), or trying to complete every bit of the game. On the other hand, 007 might take a moment to admire that amazing Asian landscape from time to time, but he would never stop the mission just to see what’s there, or to speak with every single citizen of a village. And we want the player to feel like 007, right?

I believe the right balance can be achieved, and that includes stressing the player in some parts of the game, according to the needs of the narrative.

Besides, giving the player a peek at an awesome environment but not letting her explore it completely will make her feel that this environment is real and even infinite. Isn’t that awesome?

Designing Turing Adventure Part One: Motivation: Feel the adventure

This is Part One of an ongoing series of posts describing the design of Turing Adventure. You can try the game and vote for it in the Adventure Jam here.

Motivation: Feel the adventure

There is something somewhat wrong with adventure games.

You play them to live an adventure. Years after you complete a game, you look back, and what you remember is you, embodying the adventure’s hero, delving into dungeons, confronting monsters and saving the day. Well, this is what is supposed to happen, and it works. So, what’s the problem? The problem is that the following comic is funny:

Coming back into a dungeon because the player forgot to check a corner

Jago dibuja

Can you imagine Aragorn doing that in the book? Or Indiana Jones when exiting the Temple of Doom? No, that wouldn’t make sense. Nevertheless, it is quite accepted in video games.

So the problem is the game mechanics. The game mechanics are fun, but they do not always support the story, and thus do not support the adventure experience. Current game mechanics not only allow the player to behave in ways that are unnatural for the story, but sometimes encourage it. In the end, they’re fun, and they buy gameplay time. But that is not the adventure. The result is a crossover between a casual pastime and an adventure experience, but, guess what: in a few years, you will remember the adventure, not the inventory management or the minigames.

Not convinced? Let me give you a clearer example:

Me/in/game #19 RPG's-So Many Ways to Play!

If you enjoy and often play graphic adventures or computer RPGs, tell me that you don’t play them the thorough way. You won’t, because, well, of course you do. This comic opened my eyes. It made me realize that you are doing it wrong (and so am I).

In years to come, when you remember the graphic adventure, you won’t remember going thoroughly through the conversation tree, you’ll remember the adventure. And you’ll remember it as something exciting.

Let’s recap a bit: the game mechanics are fun to play, they increase the gameplay time, and if the story is worth it, you’ll get the memories of living an adventure, à la Total Recall.

Then why do I insist that there’s a problem? Because, if it weren’t for the game mechanics, you’d have the sensation of living the adventure while you are playing it, not in the future. And that would definitely be better.

Yes, it is true that you have the sensation of living an adventure when something happens, but in the meantime, you have the sensation of solving cardboard puzzles: no danger, no excitement, no rush… no adventure!

We can do better.