Improving the EteRNA game and workflow to be more effective

  • Idea
  • Updated 5 years ago
The biggest roadblock I personally see with the current state of EteRNA is the integration of the game interface into the collective gameplay flow, something which is not completely clear or even understood, at least for me. Personally, I don't even think we actually have a solid game structure. Let me attempt to explain.

When I first joined EteRNA, I was put through the tutorials, and from there I was given more puzzles as I completed them. This provided an obvious goal and game for me to play: just solve more puzzles.

As is, the 'primary workflow' of EteRNA is presented as the challenge/player puzzle Flash app. You start in it with the tutorials, then continue in it once you complete them. The labs happen to use the same interface, but they are not part of the primary game workflow, so they come across as something on the side.

So there are two important questions here: Where do you want players spending their time? And what is the game? For something like Foldit, it's quite easy: you have a series of puzzles at any given time, and you are working to improve your score in all of them. Here in EteRNA, we don't have such a metric, so in labs it is, to some extent, blind design with analysis. Personally, I think the "game" needs to change. Solving puzzles against an energy model is looking more and more useless to me. In fact, the energy model seems to be more of a tool, not a deciding factor. What needs to be developed are game mechanics focused on DESIGN and ANALYSIS. These are the critical gameplay elements; they just need to be gamified. Or, there's something more that EteRNA players should do, in which case that needs to be integrated into the game.

So coming back to the interface. Once we have our game solidified, we can then appropriately design the user's workflow, from the site, to the game, between game 'challenges' (i.e. puzzles), and back again. Honestly, at this point the issue isn't really having lots of other things you can do; it's that we don't know what there is to be doing. Both the current game and site interface seem to promote a workflow that may have worked before, but now appears outdated. In my mind, first we need to restructure, then redesign the various parts of the user workflow and interface, if you follow my drift.

I previously suggested a 'personal lab'/dashboard/portal as the user's homepage. This is to keep everything in EteRNA integrated and highly visible, which is definitely lacking right now. Similarly with collaboration: I feel it could be much better defined by revamping groups into 'work groups', which have their own dashboard and give groups the appropriate tools to pick a task for everyone to work on and go accomplish it. In this model, the personal dashboard would probably be 'my lab workbench', groups would be 'labs', and the things you do in the 'labs' would be on your workbench. Why labs? Because we're acting as scientists, using digital tools to perform experiments. We should support this mentality, I think, and design for it.

And remember RNA-AW? Yeah, things like that should exist, and it's related to this. It's the idea of having concrete things to be doing that are both fun and useful.

There is some great stuff going on in EteRNA, but a better game, personal workflow, and collaborative workflow would help provide needed structure and encourage new players to stay with us. The tutorials are an important step, but what we're all doing here at EteRNA needs better definition. Better structure can create better tools, and with a better structure and better tools, we can do more, and more fully enjoy what we're doing.

Definitely looking for some feedback on this. I know I'm suggesting something relatively big, but from my perspective the current model doesn't work. However, I'm sure there are holes and plenty of room for improvement here.

LFP6, Player Developer


Posted 5 years ago


Astromon

Very true and great ideas, LFP! I also mentioned getting labs into the point system a couple of dev chats ago. As of now there are no points for designs or design winners?! Nothing is "geared" towards Cloud Labs (the flow). As LFP has seen and said in the post above, this needs to change as soon as possible. Putting this on the back burner simply will not do, so please get this done. Thanks!

bekeep, Learning Researcher

Thanks, LFP.  Lots of important points here.  The dev team has been discussing some of these big picture issues and we've been bouncing around some ideas.  Let's have this thread be the place to discuss specific improvements.  I'll do a follow-up reply next week.

whbob

As the complexity of the labs evolves, the tutorials/challenges will grow. If I read a book on how to paint fine art in a day, I suspect it will take me a little bit longer to actually master the technique :) To me, the game has been the challenges. Lab designs haven't followed the challenge format, and design analysis is pretty much nonexistent to me. I don't feel like I know enough to make suggestions, but here are some thoughts on the subject:

I, too, started with challenge puzzles.  Finding a solution to the puzzle at hand was the immediate challenge.  Just one of many possible solutions would do the trick. Bubbles, yea.  Points, yea.

I can see how many players have solved this challenge, but how many possible solutions are there?  Don't know.  How unique was my solution?  Don't know.  Would anyone but me care?  Don't know.

Next challenge. Repeat. Accumulate badges as challenges are met. No bubbles for badges? I'm not a quick study, so repetition is important to me. Knowing what doesn't work is as important as knowing what does. Next challenge.

I can get a badge if I vote in a lab, so I vote.  I would have thrown a dart at my monitor, but thinking better of it, I blindly click the mouse.  I have no guidelines to use.  The data is just that, data.  None of it seems useful.  The lab is so far past the simple single state molecules I'm working on in the challenges, I feel like I'm in a cab with the meter running and no place to go.

At some point I blindly enter the current labs and start making a design, as opposed to filling in a target-mode pre-designed molecule. I figure it out and meet the required criteria, but there are no bubbles, nothing? What a letdown. Instead of advancing in the game, I feel like I've been sent to the dungeon.

I've submitted a design.  At least give me some bubbles, please.  Maybe some Lab points too? 

I'm in the lab and there is no going back. I guess I'm stuck here in an endless loop, doomed to submitting designs forever? These designs are just advanced challenges. How about lab points for each design accepted? There might have to be a two-point structure: if you just modify a design and the shape stays pretty much the same, maybe half points; if you modify the shape quite a bit, maybe whole points? Would that ensure diversity?
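The two-tier point idea could be sketched by comparing the shapes of designs directly. This is purely illustrative, not anything Eterna implements: the point values and similarity cutoff are invented, and "shape" is taken here to mean the set of base pairs in a dot-bracket secondary structure.

```python
def pair_set(dotbracket):
    """Base pairs of a dot-bracket secondary structure, as (i, j) tuples."""
    stack, pairs = [], set()
    for i, ch in enumerate(dotbracket):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            pairs.add((stack.pop(), i))
    return pairs

def design_points(original, modified, full_points=10, similarity_cutoff=0.8):
    """Whole points if the shape changed substantially, half points otherwise."""
    a, b = pair_set(original), pair_set(modified)
    union = a | b
    # Jaccard similarity of the two base-pair sets: 1.0 = identical shape.
    similarity = len(a & b) / len(union) if union else 1.0
    return full_points if similarity < similarity_cutoff else full_points // 2
```

With this sketch, resubmitting the same shape (`design_points("((..))", "((..))")`) earns half points, while a substantially different shape earns the full amount.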

How important is Eterna Score? I've seen a design where the molecule bound its ligand but scored poorly. I've seen a design that failed to bind its ligand but still scored very high. How should we weigh Eterna Score vs. diversity? In a way that keeps players in the game, I hope.
 
Summary:
  • Put some bubbles in the design acceptance process.
  • Reward diversity, uniqueness & score.
  • Use Eterna U. to make lab analysis possible for all of the players.

bekeep, Learning Researcher

Hi whbob,

Many thanks for your thoughts.  Hope we can make the dungeon into a nice villa for you (and sorry you feel that way!).

Good point about the mismatch between the puzzle experience (bubbles!) and the lab experience (...). There is indeed a wide gap between the typical puzzle task and the lab task. We are definitely working on bridging that gap, both through the puzzle progression and through puzzles that would guide players through a given lab.

We've also discussed changing the incentive structure to, as LFP6 put it, reward design and analysis.  For example, adding a currency that players earn and spend.

There are also improvements planned for the lab viewer that would enable players to use the data that is there more effectively.  Learning how to use the data to improve designs is crucial and we hope to fold this into a currency system.

More on changes as we plan them.

whbob

Sorry, I didn't mean to make the labs sound so bad, but they did seem like a confusing place at first. It lost the "game" feeling. I'm getting the hang of it now :)

Astromon


It's good to get an idea of how one could perceive the labs; nice observations.

I would like to see planetarium-type backgrounds, and maybe a supernova could happen when I meet all the constraints and submit a design!


nando, Player Developer

Just throwing out an idea that I mentioned in chat a few days ago.

Let's imagine that at the end of each round, right after the experimental results come back from the lab, we calculate a global lab score per player. A possible formula would be to sort all designs submitted by a player in decreasing order of their individual experimental score, take the top N of them (I would suggest N = 10% of the total number of slots available for players in that round globally), and sum these scores.

Then, the top 10 players for that round would get a "Great Lab Designer" badge, and the very best player would receive a "Best/Top Lab Designer" badge.

A special web page (featured as prominently as possible) should display the results of say the last 5 rounds, listing for each of them the 10 most successful players. And every player should be able to see how they fared in these rounds.
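The proposed formula is concrete enough to sketch in a few lines. This is just an illustration of the proposal, not existing Eterna code; the function names and data shapes are invented.

```python
def lab_score(design_scores, total_slots):
    """Sum a player's top N experimental scores, with N = 10% of the
    round's total player slots (at least 1)."""
    n = max(1, total_slots // 10)
    return sum(sorted(design_scores, reverse=True)[:n])

def round_ranking(submissions, total_slots):
    """submissions maps player name -> list of experimental scores.
    Returns players ordered best-first: the first 10 would earn the
    'Great Lab Designer' badge, and the first entry the 'Best/Top
    Lab Designer' badge."""
    return sorted(submissions,
                  key=lambda p: lab_score(submissions[p], total_slots),
                  reverse=True)
```

For example, with 20 total slots, N = 2, so only each player's two best designs count toward their round score, leaving the rest free for experimentation as described above.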


In essence, what I'm proposing is to split the game areas more clearly. As I said recently in chat, player puzzles in Eterna could be compared to a certain extent to the creative mode of Minecraft. And that game, last I heard, is doing quite well while having different modes. Maybe Eterna could do the same. Puzzles (challenges and player-created ones) could be seen as one mode, and labs as a different one.

And if labs are a different game mode, I feel that it needs to get some form of rewarding, which doesn't necessarily have to overlap with the points used in the puzzle mode. The scheme I'm proposing above would have the advantage of giving a measure of "pride" or "bragging rights" to players who went through the trouble of not only participating in labs, but also carefully designing their sequences for a maximum experimental score. I also see a couple valuable properties associated with the apparently arbitrary 10% measure I indicated above: for one, it would incentivise participation to at least a modest level, without requiring players to fill 100% of their slots, and it would also leave adventurous players with 90% of their slots to experiment with crazy ideas and fail as much as they want without impeding their chances for a lab badge.

So, what do you guys think of that idea?

Astromon

I like all those ideas. I don't see anything about points and overall ranking being affected, though, and I think that is deserved and needed in Eterna.

LFP6, Player Developer

I do agree. Personally, I would remove the current scoring (I know this would be quite controversial, and it may or may not be included in what you're considering here) and make lab scoring like that the primary score.

For challenges and player puzzles, I might suggest changing the scoring from a single score into multiple metrics focused on how many puzzles you have completed and how hard they were. While some of the metrics I'm thinking of to determine difficulty don't exist now, and might be hard to implement, as a starting point I'm thinking:
  • A 'standard approach' bot as has been previously suggested to check for spacebar, Christmas tree, etc.
  • A bot or few that uses some more advanced techniques (if RNA-AW was something that was actually wanted, this would be a good place to use/promote that kind of thing)
  • Number of solvers
  • Time the puzzle has existed
  • Who the solvers were (their scoring metrics)
  • The puzzle creator (their scoring metrics, and the difficulty of their other puzzles)
  • Time spent by players before solving the puzzle
  • Number of mutations used by players before solving the puzzle
  • Number of resets used by players before solving
This may be too much work compared to the benefit, but I think it would help treat these puzzles more as logic challenges than a primary gameplay goal, as well as being a better metric for player puzzles (which has been discussed already, and I think was actually wanted anyhow).
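As a purely hypothetical illustration of how a few of the signals above might combine into one difficulty number: none of these fields, weights, or thresholds exist in Eterna today, and the real metric would presumably need tuning against actual puzzle data.

```python
def difficulty(puzzle):
    """puzzle is a dict of hypothetical per-puzzle statistics.
    Returns a rough 0-1 difficulty estimate (higher = harder)."""
    # A low solver-to-attempt ratio suggests a harder puzzle.
    solve_rate = puzzle["solvers"] / max(1, puzzle["attempts"])
    # Many mutations before solving also suggest difficulty (capped at 1.0).
    effort = min(1.0, puzzle["avg_mutations"] / 200)
    # Puzzles the 'standard approach' bot can solve are considered easy.
    bot_penalty = 0.0 if puzzle["solved_by_standard_bot"] else 0.3
    return round(0.5 * (1 - solve_rate) + 0.2 * effort + bot_penalty, 3)
```

A rarely solved, mutation-heavy puzzle that the bot can't crack would score high, while one the bot solves with few player mutations would score low, matching the intent of treating these puzzles as graded logic challenges.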