First posted on 18/Apr/2015 on the Game’s news feed at Game Jolt.
In pre-launch tests, players used to complete Turing Adventure in an average of two playthroughs – which is what I was aiming for.
Now I am observing that some people cannot complete the game in three playthroughs, while others complete it in just ten lines of dialogue! This high variance seems hard to balance.
However, comparing this data with that of my very early prototype, in which no player was able to complete the game in fewer than three playthroughs, I believe it is going to work like this: as the robots’ knowledge database grows thanks to user input, the puzzles will become more and more apparent to first-time players. Actually, the puzzle in Turing Adventure is quite simple, so if the robots talked almost as if they were sentient, most people should complete the game in a single playthrough. This leads me to think that, as the robots learn to speak the way I want them to, I should rework the puzzles to make them harder.
Actually, that would be great news, since the game should revolve around the puzzles. The AI behind the NPCs should be just a mechanism to further immerse the player in the adventure, not an obstacle in itself!
For a future full-length game, there would be another factor to consider. As players progress through the game, they will learn what to expect from the chatbots, so talking their way out of the puzzles will become more straightforward. Therefore, the chatbots in those later puzzles should not need to be as polished as the robots from the early game. We’ll see.