Axelrod-Python/Axelrod

Desired New strategies

marcharper opened this issue · 73 comments

Some of these may be implemented under other names already, please ask if you are unsure! Feel free to add any new ones to the list. Note that we are happy to have original contributions as well!

  • Binary decision strategies defined in "Varying Decision Inputs in Prisoner’s Dilemma", Barlow and Ashlock 2015
  • Function stack based strategies from Ashlock, Daniel. "Training function stacks to play the iterated prisoner's dilemma." Computational Intelligence and Games, 2006 IEEE Symposium on. IEEE, 2006.
  • Pavlovian, Identifier and Grudgian strategies from "n-Move Memory Evolutionarily Stable Strategies for the Iterated Prisoner's Dilemma"

The "invincible strategies" in this paper which can all be implemented as special cases of the MemoryOne or LRPlayer classes.

The two "most abundant" memory one and memory two strategies in this paper.

Adaptor from "Simple Adaptive Strategy Wins the Prisoner's Dilemma" (second PDF)

Specific strategies evolved in Evolutionary game theory using agent-based methods such as GCA.

Strategy MO and Strategy SO from this paper

Strategies implemented in PRISON (look in classics.str):

  • soft_spiteful
  • slow_tft
  • better_and_better
  • worse_and_worse2, worse_and_worse3

and see this paper

  • spiteful_cc
  • winner12, winner21
  • mem2
  • gradual_killer [already done under another name?]
  • soft_tf2t [TF2T?]
  • and many others such as the 12 ZD strategies
  • Done: c_then_per_dc, doubler, easy_go, gradual, per_ddc, per_cccdcd, prober4, tft_spiteful, worse_and_worse

From CoopSim:

  • ContriteTFT
  • TwoTitsForTwoTats -- and the generalization to NTitsForMTats
  • Others that you find interesting

Many strategies in this paper are not yet in the library:

From "Exploiting Evolutionary Modeling to Prevail in Iterated Prisoner’s Dilemma Tournaments":

From this page (see also the bibliography) for the 20th anniversary tournament:

From here:

  • Free Rider
  • Rover

From this paper and also here:

  • adaptive tft
  • contrite tft
  • handshake, fortress3, fortress4, firm but fair, gradual, naive prober, remorseful prober, reverse pavlov, soft grudger

Any of the interesting finite state machine strategies in the papers featuring Fortress (and other papers authored by Wendy Ashlock and Daniel Ashlock, and collaborators); a minimal sketch of the FSM encoding follows the list below.

  • E.g. from the 2015 paper "Multiple Opponent Optimization of
    Prisoner’s Dilemma Playing Agents" including the unnamed sugar strategies and treasure hunt strategies in figures 2 and 3
  • Solution B1 and Solution B5
    Also from "Fingerprint Analysis of the Noisy Prisoner's Dilemma Using a Finite-State Representation"
  • vengeful, PSY, PSY-TFT, TFT-PSY, UD, UC
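As a sketch of the FSM encoding only (the transition table below is invented, not one of the published machines, and it assumes the library's generic FSMPlayer takes (state, opponent_action, next_state, own_action) tuples):

import axelrod as axl
from axelrod import Action

C, D = Action.C, Action.D

# Invented two-state machine, purely to show the encoding:
# state 1 cooperates and moves to state 2 after an opponent defection;
# state 2 defects, moving back to state 1 once the opponent cooperates.
transitions = (
    (1, C, 1, C),
    (1, D, 2, D),
    (2, C, 1, D),
    (2, D, 2, D),
)
player = axl.FSMPlayer(transitions=transitions, initial_state=1, initial_action=C)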

Many from this paper. Note that several are already in the library, including ALLC, ALLD, TFT, WSLS, willing, hopeless, and desperate (and possibly others).

From these two papers:

From this page:

  • forgiving
  • nasty TFT (randomly plays DD)

From the mythical tournament preliminary to Axelrod #1:

  • Analogy
  • Look Up / Look Ahead (different from LookerUp in the library)

From this publication:

  • Gradual
  • Adaptive tit-for-tat

From this paper:

  • Lenient Grim 3
  • Exp. TFT
  • False Cooperator
  • TF3T
  • Exp Grim 2
  • Lenient Grim 2
  • Exp TF3T
  • T2

From this paper:

  • shortmem
  • selfsteem
  • Boxer
  • VeryBad
  • ANN Agents
  • GADP1
  • GADP2
  • BM
  • MC
  • Stalker

From this library (if the license is compatible):

  • cautious
  • copycat
  • craby
  • forgetful
  • golden
  • Hardy
  • Mean
  • Mensa
  • Moron
  • Observant
  • Unforgiving
  • Waffely
  • killer

Others:

No-tricks
Strategies described here

Theory of mind strategies discussed here.

Would be neat to have strategies based on:

  • cellular automata / finite state machines e.g.
  • bandit algorithms
  • the memory-based strategies described here
  • Markov chain Monte Carlo
  • Neural networks (see this paper for examples)
  • "Particle Swarm Optimization Approaches to Coevolve Strategies for the Iterated Prisoner’s Dilemma"
  • Tree based strategies from "Crossover and Evolutionary Stability in the Prisoner’s Dilemma"

Translate the Fortran strategies available in https://github.com/Axelrod-Python/axelrod-fortran to Python.

This is a brilliant issue: 👍

This strategy was mentioned on reddit (Random TitForTat and/or Grudger: defects with random probability):

https://www.reddit.com/r/GAMETHEORY/comments/480xb3/how_to_beat_this_strategy_of_prisoners_dilemma/

Would you mind if I started working on a few of these? I'm sure I'll be slow since I'm new to programming and I'm going to school while working full time, but it looks like they've been posted for a while. If there are any strategies on the list that are particularly easy to implement I'd be happy to start with one of those.

Absolutely @Adam-Flammino: please do!

I suggest that when you decide on a strategy you run it past us just to make sure it hasn't been implemented yet (it's possible that they're on the list but have already been implemented).

It could also be worth checking here: http://axelrod.readthedocs.io/en/latest/reference/all_strategies.html

That's the list of the strategies that are definitely in the library :)

As far as a suggestion for an initial one to go for, perhaps pick "Nasty Tit For Tat" which is described as "tit-for-tat but attempts to exploit non-retaliatory strategies by playing a DD with some probability". If you think there's another one in the list that you like the look of please do go for that though :)
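A minimal sketch of how that might look (the class name, the default probability and the use of Python's random module are my choices; a real implementation should use the library's seeded randomness and fill in the classifier):

import random

import axelrod as axl
from axelrod import Action

C, D = Action.C, Action.D

class NastyTitForTat(axl.Player):
    """Tit For Tat that, with probability p, defects anyway in an
    attempt to exploit non-retaliatory opponents."""

    name = "Nasty Tit For Tat"

    def __init__(self, p=0.1):
        super().__init__()
        self.p = p

    def strategy(self, opponent):
        if random.random() < self.p:
            return D  # the occasional unprovoked defection
        if not opponent.history:
            return C
        return opponent.history[-1]  # otherwise plain Tit For Tat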

Also: if you need help you can always pop in to this chat room https://gitter.im/Axelrod-Python/Axelrod; there are usually a few of us around who can help :)

That actually sounds a little similar to how I play risk. I've definitely got some more reading to do before I start, but I'd be happy to try. Thanks for the chat link too!

@Adam-Flammino We're happy to e.g. help write tests, just open a PR or ask us here (or on gitter).

@marcharper thanks! Haven't gotten to start yet, but I appreciate the help! Hopefully before too long I'll be able to actually help out with the project- it looks interesting

So I got to actually sit down and look at things today. I appreciate all the variations of tit for tat being in the same .py file- makes it easier for someone who's still learning like me to see the right format. Another quick newb question- should I submit a new .py file with my pull request or just add my code to the end of the existing file?

@Adam-Flammino Either way, really. We don't have a scheme for which strategies go in which files, we just group them as we go wherever they seem to fit.

@marcharper Cool. Thanks again.

If you're going to go with nasty tit for tat as the strategy I'd suggest that you just add it to the bottom of the existing tit for tat file :)

I appreciate how welcoming you all have been and I do think this is a very interesting project I'd like to help with eventually, but some life changes just came up and I don't know when I'll have time to actually contribute to this. I plan on coming back (hopefully with enough experience to make real contributions), but I'm not sure when that will be so don't hold any of these for me.

@Adam-Flammino No problem - thanks for the contributions so far!

We'll still be here when you have time again. All the best.

I would like to help in writing some of the strategies.

@souravsingh Are you here at PyCon UK?

@meatballs I couldn't come to PyCon UK due to visa issues. But I can try and help remotely.

Welcome @souravsingh! Drop in to our gitter channel: gitter.im/Axelrod-Python/Axelrod

Otherwise, take a look at the contribution guidelines and don't hesitate to ask any questions :)

http://axelrod.readthedocs.io/en/latest/tutorials/contributing/strategy/index.html

Hi. Is soft_tf2t up for grabs?
gradual_killer needs crossing out because it's already implemented.

@mturzanska I'm afraid that one's already in, under TitFor2Tats: http://axelrod.readthedocs.io/en/latest/_modules/axelrod/strategies/titfortat.html#TitFor2Tats

(Please correct me if you think I'm not quite right reading the two logics).

You're right. Then continuing from the top:

  • c_then_per_dc exists (as GrudgerAlternator)
  • per_cccdcd needs implementing (as a subclass of Cycler)
  • prober4 needs implementing
  • doubler needs implementing

Please let me know if I sorted it out right this time.
Would you mind if I start working on those?

Would you mind if I start working on those?

That would be awesome! Let us know if we can assist in any way :) 👍

Are Resurrection and DoubleResurrection strategies implemented? If not, I would like to work on them.

I don't believe so, what reference are they from?

They are from CoopSim

These are not yet implemented, go for it!

Hi guys! I'm new to open source software, but would love to contribute to your project.
Is it okay if I try implementing better_and_better, worse_and_worse2 and worse_and_worse3, from the PRISON PROJECT?

Hi @varung97! Welcome to the project!

Those are up for grabs so it would be great to have them in.

If we can help please let us know. We have a gitter room here: https://gitter.im/Axelrod-Python/Axelrod feel free to drop by :)

@drvinceknight Apart from type-hinting, I think I would like to work on a strategy.
Are these taken or already done?

cautious
copycat
craby
forgetful
golden

Hey @janga1997 sorry I saw this when I was busy and then it slipped my mind :)

I've taken a quick look through the matlab files that I seem able to read and to the best of my ability it looks like they're not done yet.

Probably best course of action is that once you identify a particular strategy if you can briefly say what it does we should be able to be a bit more confident in our ability to identify it :)

FYI here's the list of all the strategies currently in the library: http://axelrod.readthedocs.io/en/latest/reference/all_strategies.html We're slowly working towards including alternative names on there too (done for some of them).

Hello! I will be working on implementing these strategies:

  • the generalization NTitsForMTats
  • S (Win-stay lose-shift, Nerzhin): It cooperates if and only if it and the opponent both played the same strategy on the last round. (from No-tricks)
  • memory-based strategies

Welcome @Chadys :)

Take a look at the strategy index that has everything currently implemented here: http://axelrod.readthedocs.io/en/latest/reference/all_strategies.html

the generalization NTitsForMTats

We have a generic LookerUp strategy that could be used to make this :)

S (Win-stay lose-shift, Nerzhin): It cooperates if and only if it and the opponent both played the same strategy on the last round. (from No-tricks)

This is already implemented as WinStayLoseShift. (As a memory one strategy)

memory-based strategies

We have quite a few strategies implemented that make use of memory. We have a generic class for Memory One strategies for example, and also much longer memory strategies. If you have specific ideas feel free to let us know and we might be able to advise :) 👍

Sorry about the WinStayLoseShift, I didn't read the index carefully enough I guess.
Yes, using LookerUp is an idea; I'll think of a few solutions before deciding how to implement that (I'll probably ask for advice on gitter).
Based on http://www.briannakayama.com/Research/UCNCpres.pdf, I thought about writing a generic strategy that takes a number as an argument and creates a lookup table (using LookerUp here, obviously) by converting it to binary. The length of the memory would then depend on the log of the given number.
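A tiny sketch of that encoding in plain Python (not tied to the LookerUp API): the integer's binary expansion becomes the response table, indexed by the opponent's recent moves.

>>> n, memory = 154, 3
>>> bits = format(n, "0{}b".format(2 ** memory))   # one bit per possible history
>>> bits
'10011010'
>>> table = {format(i, "0{}b".format(memory)): ("C" if b == "1" else "D")
...          for i, b in enumerate(bits)}
>>> table["000"], table["001"]
('C', 'D')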

@Chadys Welcome! FWIW I think it's probably better to implement NTitsForMTats directly without Lookerup, since the table could be quite large. I advise making the initial move C in line with the other variants, but you can make it an option if you'd like.
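A minimal sketch along those lines (names are suggestions, and exactly how overlapping provocations are handled is left open), defaulting the opening move to C as suggested:

import axelrod as axl
from axelrod import Action

C, D = Action.C, Action.D

class NTitsForMTats(axl.Player):
    """Retaliate with N defections once the opponent has defected on
    each of the last M turns; otherwise cooperate."""

    name = "N Tits For M Tats"

    def __init__(self, N=2, M=1):
        super().__init__()
        self.N = N
        self.M = M
        self.retaliations_left = 0

    def strategy(self, opponent):
        if self.retaliations_left > 0:
            self.retaliations_left -= 1
            return D
        last_m = opponent.history[-self.M:]
        if len(last_m) == self.M and all(move == D for move in last_m):
            self.retaliations_left = self.N - 1
            return D
        return C

With N=2, M=1 this behaves roughly like Two Tits For Tat, and with N=1, M=2 roughly like TitFor2Tats (soft_tf2t).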

I can take care of the following from the PRISON list:

  • soft_spiteful
  • slow_tft
  • better_and_better
  • gradual_killer

Also, I think soft_tf2t is just an instance of a more generic NTitsForMTats that @Chadys is working on where N=1 and M=2. Basically it responds with a single defection only if the opponent has defected on the last two turns and otherwise cooperates.

Looks like better_and_better, soft_tf2t, and gradual_killer are already done so you can strike those. I'll be better about checking first in the future.

I'll be better about checking first in the future.

Not at all, we try and keep the issue itself up to date :) I've struck those out now.

I'm going to take a look at the FORTRAN code from the 2nd Axelrod tournament. I spent some time in grad school translating an old genetic algorithm from FORTRAN into Python so hopefully I can be of some use there. I know there was a moment on the Talk Python podcast where you guys said someone had already done some work in this area. Is there any list of which ones have already been implemented?

I started working my way through the FORTRAN code and I've gotten some (Pythonish) pseudocode for the first 6 algorithms pasted in a gist here. I'll likely get a few more onto the gist and then start actually writing the code, tests, etc. for those, mark them as completed, and continue on.

That's awesome @jtsmith2!

As well as the PyCon person who worked on the Fortran code @marcharper had a go a little while ago, here's an issue where he described what insights he managed to get #381

Hello! I'm glad to announce that I'll start working on BM (Bush-Mosteller) from now on. I think I've got the whole theory down and am pretty confident that I can try to implement it.
Here's my source: http://www.intechopen.com/books/reinforcement_learning/dynamics_of_the_bush-mosteller_learning_algorithm_in_2x2_games

The l (learning rate) value is not given, so unless any of you has an idea for it I'll start with a solid 0.5 and run some tests.
If I need any help I'll pass by gitter or post something here if necessary.
Cheers!
-Gaëtan

@jtsmith2 There is a strategy file axelrod_second.py that has the strategies from that tournament, with a few exceptions (e.g. TitForTat is in a different file, it's famous). Note that a few of the strategies also appeared in the first tournament, see axelrod_first.py. IIRC there should be ~40 strategies from the second tournament that are not implemented in the library.

Once upon a time I worked on a few of the strategies from the second tournament, see this quite old branch. I never merged them because I wasn't sure if they were correct or complete. We're going for a "best effort" translation of the original strategies so they don't have to be perfect but good tests will be greatly appreciated.

Let me know if you have any questions!

@GGOUSSEAUD I recommend experimenting with the learning rate to try to find something good. Just run some tournaments and see what seems to work best.
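For instance, one way to do the sweep once the strategy exists (BushMosteller and its learning_rate argument below are hypothetical placeholders for whatever the implementation ends up being called):

import axelrod as axl

def rank_learning_rates(make_player, rates, opponents, turns=200, repetitions=10):
    """Play one tournament containing a candidate per learning rate and
    return the player names in ranked order."""
    candidates = [make_player(rate) for rate in rates]
    tournament = axl.Tournament(candidates + list(opponents),
                                turns=turns, repetitions=repetitions)
    return tournament.play().ranked_names

# Hypothetical usage once BushMosteller exists:
# rank_learning_rates(lambda r: BushMosteller(learning_rate=r),
#                     rates=(0.1, 0.3, 0.5, 0.7, 0.9),
#                     opponents=[axl.Cooperator(), axl.Defector(),
#                                axl.TitForTat(), axl.Random()])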

@jtsmith2 If it's possible to still run the Fortran we could test each player's response to, say, the 8 or 16 possible opening histories -- that is, every permutation of opponent plays of C and D of length 3 or 4 -- and use those values as tests.
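A sketch of generating those fixtures (fortran_response below stands in for whatever call actually runs the original Fortran player against a fixed history; it is an assumption, not an existing function):

from itertools import product

from axelrod import Action

C, D = Action.C, Action.D

def expected_responses(fortran_response, length=4):
    """Map every opponent history of the given length to the move the
    original Fortran implementation makes, for use as test fixtures."""
    return {history: fortran_response(history)
            for history in product([C, D], repeat=length)}

The resulting dictionary could then be pasted into the strategy's test class and replayed against the Python implementation.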

Hello,
I've read the DBS article and started to work on an implementation

@edouardArgenson Great! Let us know if you need help writing tests (or anything else).

Hello.

Can I work on the Stein and Rapoport strategy?
If so, can you give me a guide? (I suppose I have to add a new file and one more for testing it.)

http://axelrod.readthedocs.io/en/latest/reference/overview_of_strategies.html#stein-and-rapoport
"
Stein and Rapoport
Not implemented yet

This strategy plays a modification of Tit For Tat.

It cooperates for the first 4 moves.
It defects on the last 2 moves.
Every 15 moves it makes use of a chi-squared test to check if the opponent is playing randomly.
This strategy came 6th in Axelrod’s original tournament.
"

Thank you

Hi @MariosZoulias, this strategy can go in axelrod/strategies/axelrod_first.py: that's where all the strategies for the first tournament go. Similarly the test file for it can go in to axelrod/tests/strategies/test_axelrod_first.py.

Here's the general guidelines on writing a new strategy but let me know if you'd like more guidance: http://axelrod.readthedocs.io/en/latest/tutorials/contributing/strategy/index.html

I think the main "tricky" thing with this strategy will be implementing the chi-squared test which I suggest you do by adding scipy as a dependency https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chisquare.html

And when you've made the strategy be sure to update the docs at http://axelrod.readthedocs.io/en/latest/reference/overview_of_strategies.html#stein-and-rapoport to say that the strategy is now implemented :D

@drvinceknight Thank you so much.
I will start working as soon as possible

@drvinceknight I am looking to implement the Eugine_Nier Avenger strategy from this link: http://lesswrong.com/lw/7f2/prisoners_dilemma_tournament_results/

Do we have to create a separate script or put it in TitforTat?

F (-, Eugine_Nier): Standard Tit-for-Tat with the following modifications: 1) Always defect on the last move. 2) Once the other player defects 5 times, switch to all defect.

I think that sits nicely in the titfortat file 👍
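A rough sketch of that description (the tricky part is "defect on the last move", which needs the match length; here I assume it is available through match_attributes, so treat this purely as illustration):

import axelrod as axl
from axelrod import Action

C, D = Action.C, Action.D

class EugineNier(axl.Player):
    """Tit For Tat, but defect on the final move and switch to permanent
    defection once the opponent has defected five times."""

    name = "Eugine Nier"

    def strategy(self, opponent):
        if opponent.defections >= 5:
            return D  # opponent has used up their five defections
        length = self.match_attributes.get("length")
        if length is not None and len(self.history) == length - 1:
            return D  # final move of a known-length match
        if not opponent.history:
            return C
        return opponent.history[-1]  # otherwise play Tit For Tat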

Are the FORTRAN strategies from Axelrod Tournament 2 still open?

Are the FORTRAN strategies from Axelrod Tournament 2 still open?

There are indeed. If you find a particular one you want to tackle it's probably always worth double checking http://axelrod.readthedocs.io/en/latest/reference/all_strategies.html that it has not been implemented already, or asking here :)

@drvinceknight Are there naming conventions for the strategies? I see that K61R is named Champion, K46R is named Eatherley, but K76R is named Tester.

  • Should I default to naming the strategies after their authors?

    • How about the case of authors having the same last name? K47R and K48R are by Richard Hufford and George Hufford respectively.
    • How about a singular strategy that has multiple authors? K60R is by Jim Graaskamp and Ken Katzen.
  • Or should I name them simply according to the Fortran source file (e.g., Champion would have been named K61R instead)?

@0101010001010111 (awesome handle btw): I think the more descriptive the name the better but feel free to make a judgement call (and it can always be discussed on the PR).

The main reason we went for author names for the first of Axelrod's tournament was because there were no other names to go for.

Note that we try to include all relevant names (in the case of strategies being called different things in different sources) in the docstrings: you can see examples of this here: http://axelrod.readthedocs.io/en/latest/reference/all_strategies.html#axelrod.strategies.grudger.Grudger

Hey, I have 2 questions.

  1. I am working on the new strategy and I'm trying to implement it in axelrod_first.py as you said, so I made the class "stein_and_rapoport", but when I try to create a match of stein_and_rapoport vs Alternator it says that there is no stein_and_rapoport strategy in axelrod. So how do I insert my strategy into the axelrod files (I followed the steps in the docs)?

2) In Stein and Rapoport it says that every 15 turns the player does a chi-squared test.
Question A) Why does it run a chi-squared test? I mean, how does it change the player's behavior (D or C)?
Question B) For example, when we are on the 15th turn we use the whole history, but when we are on turn 30 (e.g.) do we use the whole history (1-30) or the last 15 (15-30)? I suppose that for chi-squared tests the more data the merrier.

Thanks a lot
Marios

I am working on the new strategy and I'm trying to implement it in axelrod_first.py as you said, so I made the class "stein_and_rapoport", but when I try to create a match of stein_and_rapoport vs Alternator it says that there is no stein_and_rapoport strategy in axelrod. So how do I insert my strategy into the axelrod files (I followed the steps in the docs)?

It sounds like you're not quite following all the steps (but it's difficult to guess without seeing your code). From: http://axelrod.readthedocs.io/en/latest/tutorials/contributing/strategy/adding_the_new_strategy.html

2) In Stein and Rapoport it says that every 15 turns the player does a chi-squared test.
Question A) Why does it run a chi-squared test? I mean, how does it change the player's behavior (D or C)?

Here's what the description says: "Every 15 moves it makes use of a chi-squared test to check if the opponent is playing randomly."

So you need to do a chi-squared test on the counts of cooperations and defections to see whether they differ significantly from a 50/50 split (i.e. from random play).

Question B) For example, when we are on the 15th turn we use the whole history, but when we are on turn 30 (e.g.) do we use the whole history (1-30) or the last 15 (15-30)? I suppose that for chi-squared tests the more data the merrier.

Use the whole history.

Thank you for your answer.
But I still have a question.
Let's assume that the game is Stein_and_Rapoport vs Random...
So the Stein_and_Rapoport player will work out that the random one plays randomly.
So how does this fact (that the opponent plays randomly or not) change the move of Stein_and_Rapoport?
Do we check it just theoretically (just to know whether the opponent plays randomly) or practically (e.g. if they play randomly we always defect, if not we play tit for tat)?

Thank you

Hi @MariosZoulias -- the strategy isn't well-described but I assume that the Chi-squared test is used to determine if the opponent is playing randomly by some level of confidence, and if so, defect against it.

Thanks a lot for the answer.
I also have two more questions (actually I need your advice if possible).
Working on the chi-squared test,

  1. in order to work out whether the opponent behaves randomly, do I have to take into account their next moves after the C and D of my player (stein_and_rapoport) and then check the chi-squared? Or is it even simpler?
  2. Strategy Random is random (OK). But I think strategies Alternator, Cooperator and Defector (e.g.) are also random because they behave in the same way all the time. Also I believe TitForTat is not a random strategy (the player behaves differently according to the moves of the other player). Am I right in my thinking?

Thank you

in order to work out whether the opponent behaves randomly, do I have to take into account their next moves after the C and D of my player (stein_and_rapoport) and then check the chi-squared? Or is it even simpler?

I believe it's a straightforward chi-squared test based on the two numbers: the number of cooperations and the number of defections. A chi-squared test checks those counts and infers (from the total number of counts) whether or not they are consistent with a random distribution.

Strategy Random is random (OK). But I think strategies Alternator, Cooperator and Defector (e.g.) are also random because they behave in the same way all the time. Also I believe TitForTat is not a random strategy (the player behaves differently according to the moves of the other player). Am I right in my thinking?

No, Alternator, Cooperator and Defector are not random. If you were playing against Cooperator the counts after 40 turns would be 40 cooperations and 0 defections. That would be statistically different from 20 cooperations and 20 defections, as would be indicated by the chi-squared test.

If you were playing the Random player, perhaps after 4 turns you would have 4 cooperations and 0 defections and (perhaps) the chi-squared test would say that that is statistically different from random behaviour; however after 40 rounds maybe the count would be 24 and 16, which the chi-squared test would say is consistent with random play.

Here is the chi-squared test in scipy: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chisquare.html

>>> from scipy.stats import chisquare
>>> test = chisquare([4, 0])
>>> test.pvalue
0.04550026389635857
>>> test = chisquare([26, 16])
>>> test.pvalue
0.12282264810139186
>>> test = chisquare([40, 40])
>>> test.pvalue
1.0
>>> test = chisquare([101, 99])
>>> test.pvalue
0.88753708398171505

Here is a plot of test.pvalue for chisquare([100 - n, n]), i.e. looking at all possible counts of cooperations and defections after 100 turns:

>>> import matplotlib.pyplot as plt
>>> ns = range(1, 101)
>>> ps = [chisquare([100 - n, n]).pvalue for n in ns]
>>> plt.plot(ns, ps)

[Plot: p-values from chisquare([100 - n, n]) plotted against n]

That's showing that the p-value is high when we're near a 50/50 split.

What a chi-squared test is doing is checking whether the given distribution (the counts of defections and cooperations) is statistically significantly different from the random distribution (a 50/50 split). This is done by comparing the p-value to some significance level alpha. If pvalue < alpha then you would say that the distribution is significantly different from the random distribution; if pvalue >= alpha then we cannot reject the hypothesis that the opponent is playing randomly, so the strategy treats them as random. Often a value of alpha=0.05 is used in the literature but that's just an arbitrary choice, so we would need to make a choice for the strategy (I assume none can be found in the literature) and that could also be a parameter of the strategy.
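In code that decision rule is just the comparison below (alpha = 0.05 only as the conventional default):

>>> from scipy.stats import chisquare
>>> alpha = 0.05
>>> pvalue = chisquare([26, 16]).pvalue
>>> pvalue >= alpha   # True here, so we'd treat this opponent as random
True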

Note that axl.Player has cooperations and defections attributes that count these things already, so using the chi-squared test with the library will be straightforward:

>>> from scipy.stats import chisquare
>>> import axelrod as axl
>>> axl.seed(0)
>>> players = (axl.Cooperator(), axl.Random())
>>> match = axl.Match(players, turns=200)
>>> _ = match.play()
>>> players[0].cooperations, players[0].defections
(200, 0)
>>> chisquare([players[0].cooperations, players[0].defections]).pvalue
2.0884875837625688e-45
>>> players[1].cooperations, players[1].defections
(93, 107)
>>> chisquare([players[1].cooperations, players[1].defections]).pvalue
0.32219880616257868

Thank you for the analysis.
The only thing that is not clear in my mind is this:
If I do

chisquare([players[1].cooperations, players[1].defections]).pvalue

with

players = (axl.Random(), axl.TitForTat())

then the p-value for TitForTat is going to be a big number (like 0.65), which means we have to say the opponent (TitForTat) plays randomly.
That is not true, because it doesn't play randomly but according to the TitForTat strategy.
Similarly, for Alternator (which is 50/50) scipy is going to give a high number, so again we will treat it as random.
So in both cases we fail, because TitForTat and Alternator are not random.

Where is my thinking wrong?

You're not doing anything wrong, I think you're just pointing out a weakness of the strategy. From how it is described I think the only thing you can do is test the distribution of C and D as I have written. Because of the way the strategy in question plays it would in fact recognise that Tit for Tat is not random.

Some simple ZD ones to implement from the literature. From @marcharper on #1041:

I have found some other concrete ZD examples in case we want to add more examples from the literature:

(11/13, 1/2, 7/26, 0) from Press and Dyson
ZDmischief (0.8, 0.6, 0.1, 0) and ZDextortion (0.64, 0.18, 0.28, 0) from this paper: https://arxiv.org/pdf/1308.2576.pdf

There's a memory-two generalization in this paper on page 21, as well as the memory-one (15/16, 1/2, 7/16, 1/16): http://math.uchicago.edu/~may/REU2014/REUPapers/Li,Siwei.pdf

Looks like maybe one or two more in this paper, PZDR (1.0, 0.35, 0.75, 0.1) (though it looks like a donation game matrix): https://pdfs.semanticscholar.org/824a/2123e1de5aa2e971fa9b1bf167b8ff246aa5.pdf
Some in this paper, see the caption for Fig 3: http://web.evolbio.mpg.de/~hilbe/Research_files/Hilbe%20et%20al%20(GEB%202015)%20Partners%20or%20rivals.pdf
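If we add them, most of these memory-one examples can probably be expressed directly with MemoryOnePlayer (a sketch only; named classes with docstrings and tests would be the proper route, and I'm assuming the papers' vectors use the library's (CC, CD, DC, DD) ordering):

>>> import axelrod as axl
>>> press_dyson = axl.MemoryOnePlayer(four_vector=(11/13, 1/2, 7/26, 0))
>>> zd_mischief = axl.MemoryOnePlayer(four_vector=(0.8, 0.6, 0.1, 0))
>>> zd_extortion = axl.MemoryOnePlayer(four_vector=(0.64, 0.18, 0.28, 0))

The memory-two generalisation from the Li paper would need a memory-two analogue rather than MemoryOnePlayer.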

Have edited the above list with a pointer to the Fortran strategies.

@drvinceknight The link to the Mensa strategy shows a license which says that "Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer."

Should we be working on adding the strategies to the library, considering the license is incompatible?

The licence applies to the source code itself, not to the idea which is captured by that code. We would be in breach of the licence if we took the code and incorporated it into our library, but we are perfectly OK to code the strategy ourselves.

Hello all! What a cool project, I just discovered this a few days ago. I'd love to contribute some new strategies. Has anyone implemented a Perlin style random strategy?

Hi @JCodyA, you can check the list of references in the documentation to see if there's a matching source. If you are still unsure, please post a source and I should be able to tell if there's already a matching strategy in the library.

@marcharper I had a look at docs/reference/all_strategies.rst and the only strategy listed that was similar was the rand.py strategy, but not a Perlin one. I'm thinking of two possible variations of a Perlin strategy: a Perlin cooperator and a Perlin defector. One will cooperate on a semi-random basis similar to natural randomness (i.e. raindrops), and the other will defect on a semi-random basis.

@JCodyA I don't think there's anything quite like that -- I presume you mean that a player will, say, defect with some distribution other than a Bernoulli. There are several strategies that behave like, say, TFT and then randomly defect, or otherwise act randomly or noisily, but that doesn't sound like quite the same thing. See TrickyCooperator and RandomTitForTat for examples.
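To make the distinction concrete, here is a minimal sketch of a non-Bernoulli random player (not real Perlin noise: it just thresholds a slowly drifting random walk so moves come in correlated runs; all names and parameters are illustrative, and a real version should use the library's seeded randomness):

import random

import axelrod as axl
from axelrod import Action

C, D = Action.C, Action.D

class SmoothedRandomCooperator(axl.Player):
    """Cooperates when a slowly drifting internal signal is above a
    threshold, giving clustered runs of C and D rather than
    independent coin flips."""

    name = "Smoothed Random Cooperator"

    def __init__(self, step=0.2, threshold=0.0):
        super().__init__()
        self.step = step
        self.threshold = threshold
        self.signal = 0.0

    def strategy(self, opponent):
        # small random increments -> neighbouring moves are correlated
        self.signal += random.uniform(-self.step, self.step)
        return C if self.signal > self.threshold else D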