As I have written in earlier posts, at the remote, underground Gamblit Gaming command center we are tirelessly laboring to create the content, know-how and technology that will power the next generation of gambling games to hit casino floors, in the midst of an unprecedented transition that’s sweeping the industry.
You may have guessed by now that we are talking about the addition of the element of skill to games that have traditionally been entirely based on chance.
In our particular case I’m not even referring to more traditional slot-machines “lightly sprinkled” with skill-based mini-games or bonus rounds – I’m sure others will do an excellent job at covering that market segment.
What we are aiming for are experiences entirely new to the casino floor: games you would normally expect to see on your mobile phone, on your computer, or in an arcade.
Of course with the introduction of skill into the gambling experience comes a whole new set of player expectations and dynamics. As Jeff Hwang puts it in his excellent analysis of skill-based gambling games:
“…If you’re going to require skill, the player needs to be compensated for having skill.”
Compensation for skillful play immediately implies some form of quantification of the player’s skill in order to help determine the outcome.
This can be done in a fairly straightforward, intuitive and elegant manner in the context of multiplayer games such as the 2-to-4 player head-to-head games on the interactive tables. The skill-game and the wagering proposition complement each other seamlessly there: the player measures their skill against the skill of other players, and if they prevail they collect the prize at stake.
However, things get a lot more tricky as soon as we venture into the realm of house-backed single-player games – and the term “tricky” is probably an understatement, as these games and experiments yielded some of the most interesting and complex game-design challenges that I have encountered throughout my career.
Without going too deep into the truly nitty-gritty details, I thought I’d provide a glimpse into this fascinating set of problems by taking a closer look at some aspects of one of the numerous skill-based “gamblification” methods we are applying to in-development games.
But first, consider this: since we are talking about strictly regulated, reputable and fair gaming products (with none of the proverbial shadiness of carnival games), there are a number of ground rules we have to pay close attention to at all times. Some of these are:
- The game has to be fun – this is pretty much a no-brainer.
- The wagering proposition has to make intuitive sense, and outcomes have to feel fair in the context of skill (“the better I play, the better my results generally are.”)
- The math-model has to comply with minimum RTP (return-to-player) requirements (these vary by jurisdiction, but the baseline is around 75% or higher).
- In some jurisdictions (e.g. Nevada) it has to be mindful of RTP-drift restrictions – typically a maximum of ±4% from theoretical RTP over n number of plays. (I suspect this requirement is going to be phased out as skill-gaming regulations continue to evolve. Unlike with purely chance-based games – where RTP-drift over many games played indicates something going really wrong – the same drift in skill-influenced games merely reflects meaningful, but harmless fluctuations in aggregate player skill.)
- The math-model can’t utilize adaptive behavior – i.e. the game’s overall return can’t be adjusted based on previous results. This is flat-out illegal in most (if not all) jurisdictions.
- The game has to present a predictable financial performance-profile, protect the house from potential exploits, offer multiple pay-models and other miscellaneous operator features, etc.
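To make the RTP constraints above a bit more concrete, here is a minimal sketch of how a session’s realized RTP could be checked against the 75% floor and the ±4% drift window mentioned in the list. The function names and the worked numbers are invented for illustration – this is not any jurisdiction’s actual compliance test:

```python
# Illustrative sketch of the RTP ground rules above. The 75% floor and
# the +/-4% drift window are the figures quoted in the post; the function
# names are made up, not regulatory terminology.

def realized_rtp(total_paid_out: float, total_wagered: float) -> float:
    """Realized return-to-player over a batch of plays."""
    return total_paid_out / total_wagered

def meets_minimum_rtp(theoretical_rtp: float, floor: float = 0.75) -> bool:
    """True if the math model's theoretical RTP clears the minimum."""
    return theoretical_rtp >= floor

def within_drift_limit(realized: float, theoretical: float,
                       max_drift: float = 0.04) -> bool:
    """True if realized RTP stays within +/-max_drift of theoretical RTP."""
    return abs(realized - theoretical) <= max_drift

# Hypothetical session: $10,000 wagered, $8,400 paid out, 86% theoretical RTP
rtp = realized_rtp(8_400, 10_000)
print(meets_minimum_rtp(0.86))        # True: 86% clears the 75% floor
print(within_drift_limit(rtp, 0.86))  # True: |84% - 86%| is within 4%
```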
With all that out of the way, let’s continue with our original objective, and look at one of those “gamblification” approaches.
The example in question comes with a simple basic proposition: 5 payouts per wager are generated by an RNG (random number generator), and the player has to perform a series of actions requiring skill in order to actually collect those payouts.
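The basic proposition can be sketched in a few lines of Python. The pay table and the single per-payout “collection probability” standing in for skill are invented for illustration – the actual games obviously use far richer skill models:

```python
import random

# Toy model of the proposition above: the RNG draws 5 payouts per wager,
# and the player only banks the ones they successfully collect. Skill is
# crudely modeled as a per-payout collection probability; the pay table
# (in multiples of the wager) is made up, chosen so a strong player lands
# somewhere in the 80-90% RTP range.

PAY_TABLE = [0, 0, 0, 0, 0, 0.1, 0.5, 1.1]

def play_round(rng: random.Random, collect_prob: float) -> float:
    """One wager: draw 5 payouts, collect each with probability collect_prob."""
    payouts = [rng.choice(PAY_TABLE) for _ in range(5)]
    return sum(p for p in payouts if rng.random() < collect_prob)

rng = random.Random(42)
rounds = 10_000
returned = sum(play_round(rng, collect_prob=0.8) for _ in range(rounds))
print(f"Realized RTP over {rounds} rounds: {returned / rounds:.1%}")
```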
To offer a more specific example, one of our games called Lucky Words (shown above, and recently presented at the NIGA Convention) uses a similar methodology: in its particular case the payouts are collected by connecting letter-tiles to form words.
Well, this sounds fairly straightforward.
Once a handful of test-players are exposed to the most basic, bare-bones implementation of the approach, and they get a chance to play a few rounds each, this is what ends up happening:
While their theoretical RTPs (derived by projecting their measured performance over a very large number of games) neatly line up between 76% and 96%, quite predictably their actual RTP values (the returns and outcomes they experienced over the course of their engagement with the game) are all over the place, wildly fluctuating between 48% and 137%.
Even more importantly, there’s little correlation between their performance at the skill game and the wagering results they experienced.
Let’s take Player 1 for example: even though they performed relatively poorly at the game with only 2.7 words formed per round on average, they hit a substantial payout somewhere along the line, and walked away with a juicy 126.6% return.
Conversely, Player 4 did really well with an average of 4.6 payouts collected (a fact that’s also reflected in their 96.2% theoretical RTP), but as far as their immediate experience is concerned they finished with a dismal 64.8% realized return. Ouch.
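The scatter is easy to reproduce with a toy simulation: even if player skill never varied at all, short sessions land all over the place while the long-run average hugs the theoretical value. All the numbers below are invented – the point is the variance, not the pay table:

```python
import random

# Toy illustration of the theoretical-vs-realized RTP gap described
# above. The pay distribution (in multiples of the wager) is fixed, i.e.
# "skill" is held constant; 20-round sessions still scatter wildly while
# a very long run converges on the table's mean return.

TOY_PAY_TABLE = [0, 0, 0, 0, 0.25, 0.5, 1, 5]

def session_rtp(rng: random.Random, rounds: int) -> float:
    """Average return per unit wagered over a session of the given length."""
    return sum(rng.choice(TOY_PAY_TABLE) for _ in range(rounds)) / rounds

rng = random.Random(7)
print(f"long run: {session_rtp(rng, 200_000):.1%}")       # near the mean
print("20-round sessions:",
      [f"{session_rtp(rng, 20):.0%}" for _ in range(6)])  # all over the place
```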
Granted, it’s unrealistic to expect an RNG-based payout generation mechanism to reward good performance every single time – but surely, you ask, something can be done to tame this beast?
Well, the answer is most certainly yes, and the various models that reconcile design-tensions like this are exactly what we are busy identifying, developing and testing with players.
Without diving into the specifics of how exactly this is done, let’s take a quick look at the following example, where player-performance data collected over a series of consecutive games is mapped to and back-tested against 4 different difficulty-profiles. Each of these profiles represents a carefully calibrated skill-requirement “ramp”:
As you can see, the effects can be fairly dramatic even when tinkering with just this one “lever”: with the first profile the RTP-gap between the lowest and highest performer is still a full 38 percentage points, whereas moving to the third profile drops the same gap down to 22 points, most noticeably lifting the performance of less skilled players.
In essence these difficulty-profiles either amplify or dampen the impact of skill on the overall outcome, shifting the bulk of the benefits between the extreme ends of the skill-continuum.
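One very simple way to picture such a ramp is a curve that maps raw skill to the fraction of generated payouts actually collected: a flatter curve dampens skill differences, a steeper one amplifies them. The power-curve below is purely a stand-in for illustration, not one of the actual calibrated profiles:

```python
# Hypothetical difficulty "ramp": map raw player skill (0..1) to a
# payout-collection rate. ramp < 1 dampens skill differences (forgiving
# requirement), ramp > 1 amplifies them (steep requirement). The curve
# shape and skill values are invented for illustration.

def collection_rate(skill: float, ramp: float) -> float:
    """Map skill in [0, 1] to a payout-collection rate via a power curve."""
    return skill ** ramp

for ramp in (0.5, 1.0, 2.0):
    low = collection_rate(0.3, ramp)   # a weaker player
    high = collection_rate(0.9, ramp)  # a stronger player
    print(f"ramp={ramp}: weak collects {low:.0%}, strong {high:.0%}, "
          f"gap {high - low:.0%}")
```

Dampening the ramp narrows the gap mostly by lifting the weaker player’s collection rate – the same qualitative effect as moving from the first to the third profile above.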
Of course all this is barely scratching the surface, but going beyond this point without stirring up the proverbial “secret sauce” is becoming exponentially more challenging.
This is where we cross over into the domain of more advanced tools and techniques: RTP “bracketing” to guarantee minimum payouts; methods of managing and flattening volatility without draining the fun out of the experience for more skilled players; avoiding inadvertent “wealth transfers”; and various pooling mechanisms thrown into the mix, which can offer highly skilled players exceptional returns while keeping the overall return of the game firmly at the target level.
Each of these topics is big enough on its own to warrant an entirely separate discussion at a later point.
Thank you for sticking with this all the way to the end – to be continued. 🙂