So, surprise surprise.

I wanted to use David's library to test some properties of the matchmaking procedure of a competitive programming game (Gladiabots). Of course my first choice was to do it on the 50g, to let the device "suffer" a bit; and because the mathematical library simplifies a lot of operations that on a PC would require specific libraries for programming languages, or specific programming languages (Scilab, R, and so on).

The problem, summarized: generate a list of 20000 numbers between 1 and 8 (the ids of the maps in the game). Then randomly pick 1442 of those, creating a second list. Then check how many times the sublist { 1 1 } (or { n n } with n ∈ [1, 8]) appears in this second list.

Note that 5 1 1 1 2 contains the sublist { 1 1 } twice (occurrences may overlap): 5 [ 1 1 ] 1 2 and 5 1 [ 1 1 ] 2 .
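To make the counting rule concrete, here is a minimal bash sketch of mine (bash is the language I end up using later in this post; the function name is my own, not from any library):

```shell
#!/usr/bin/env bash
# Count how many times the same value appears twice in a row, counting
# overlapping occurrences: "5 1 1 1 2" contains { n n } twice.
count_consecutive_repeats() {
    local seq_arr=( $1 );
    local count=0 i;
    for (( i = 0; i < ${#seq_arr[@]} - 1; i++ )); do
        if [[ ${seq_arr[$i]} -eq ${seq_arr[$(( i + 1 ))]} ]]; then
            count=$(( count + 1 ));
        fi
    done
    echo $count;
}

count_consecutive_repeats "5 1 1 1 2"   # prints 2
```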

In matchmaking terms it means "how many times, from the perspective of the player, the same map has to be played consecutively (which could be frustrating)". I wanted to compare it with a real sample. I have a sample of 20000 matches, and the most active player played 1442 of those 20000.

The 50g did the test by generating random numbers between 1 and 8 (series of 1442 random numbers, iterated 100 times). I had to wait a bit, but it was possible. The result matched the real sample.
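For reference, the baseline test can be sketched in bash (my own sketch, not the 50g program; `baseline_run` is a name of mine). With 1442 independent draws from 8 maps, the expected number of consecutive repetitions is 1441/8, roughly 180:

```shell
#!/usr/bin/env bash
# Baseline: draw n independent random maps in 1..maps and count how many
# times a draw repeats the previous one. Uses bash's built-in $RANDOM.
baseline_run() {
    local n=$1 maps=$2;
    local prev=-1 cur count=0 i;
    for (( i = 0; i < n; i++ )); do
        cur=$(( RANDOM % maps + 1 ));
        if [[ $cur -eq $prev ]]; then count=$(( count + 1 )); fi
        prev=$cur;
    done
    echo $count;
}

reps=$( baseline_run 1442 8 )   # typically somewhere around 180
echo "$reps"
```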

Then I thought a bit and proposed the following to the dev of the game. To lower the repetition of maps while still serving all the maps in a non-predictable pattern (otherwise people might stop playing for a while to avoid maps they do not like): make a permutation of the maps, consume it, and then make a new permutation, instead of just picking a random map each time, which can lead to long repetitions.

So I set up a program to concatenate random permutations of the same base list ( { 1 2 3 4 5 6 7 8 } ) to create a final list 20000 elements long. Then pick 1442 random elements from it (without picking the same position twice) to roughly simulate the activity of a player (not really valid, since a player mostly plays at certain moments of the day), then check how many times the same map appears consecutively, as explained above.
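The permutation-pool idea can be sketched in bash (a sketch of mine, not the game's code; `build_permutation_pool` is my own name, and it uses the coreutils `shuf` binary that also appears later in this post):

```shell
#!/usr/bin/env bash
# Concatenate num_perms random permutations of { 1 .. num_maps } into one
# pool, the way the 50g program below does with LSHF.
build_permutation_pool() {
    local num_perms=$1;
    local num_maps=$2;
    local out=();
    local p;
    for (( p = 0; p < num_perms; p++ )); do
        out+=( $( seq 1 "$num_maps" | shuf ) );
    done
    echo "${out[@]}";
}

# 3 permutations of 8 maps -> a pool of 24 entries; a map can appear at
# most twice in a row, and only across a permutation boundary.
pool=( $( build_permutation_pool 3 8 ) )
echo "${pool[@]}"
```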

This is the current code (for more info, check assembla):

Code:

simSamplingPermHelp
"
input:
5: totalIterations
4: numPermutations to do in one iteration
3: numberPossibleResults to put in one permutation
2: the number of picks from the number of elements in all the permutations of the iteration
1: listToMatch, the sublist to search for among the picked results
output:
2: average number of appearances of the given list to match
1: list with the number of matches per iteration
example:
10 5 6 3 {1 1} simSamplingPerm
requirements:
- it requires the list library by DavidM:
  check hpmuseum.org's forum topic keywords 'processing lists!'
"

simSamplingPerm
@simSamplingFromPermutations and distributions.
@the idea: we make permutations of a base list over and over and we
@sample random elements from it, and then we see
@if the distribution changes compared to the case where all the elements
@can reappear every time.
@
@ 11:17 06.06.2017
@ Yesterday I launched a test with 100 iterations,
@ each with 2500 permutations of 8 elements (to match the 20000 samples of the
@ gladiabots checks), picking 1442 elements in every iteration
@ and matching the list { 1 1 }.
@ It is still running after 8+ hours. Moreover, expanding 20000 elements in one list
@ is, I suppose, very hard for a calculator with ~200 KB of available RAM.
@ So I have to compute this with faster systems and languages (the 50g is fast,
@ one just needs a proper system)

\<<
  @push default values for the extra locals; the strings are inline
  @documentation and are dropped
  0 "sumMatchingSublist" DROP
  0 "partialSumMatchingSublist" DROP
  0 "randV" DROP
  {} "lastRandomPerm" DROP
  {} "newRandomPerm" DROP
  {} "basePerm" DROP
  {} "newBasePerm" DROP
  0 "newBasePermLength" DROP
  -1 "listToMatchSize" DROP
  {} "newSequencePicks" DROP
  {} "newSequence" DROP
  {} "numMatchesResultList" DROP
  \->
  @input
  totalIterations numPermutations numberPosResults noPickFromPerm listToMatch
  @local var
  sumMatchingSublist partialSumMatchingSublist randV lastRandomPerm
  newRandomPerm basePerm newBasePerm newBasePermLength listToMatchSize
  newSequencePicks newSequence numMatchesResultList
  \<<

    PUSH
    -3 SF -105 SF
    listToMatch SIZE 'listToMatchSize' STO
    numberPosResults seqList 'basePerm' STO
    @the main loop
    1 totalIterations
    START
      @ reset
      {} 'newRandomPerm' STO
      {} 'lastRandomPerm' STO
      {} 'newBasePerm' STO
      0 'partialSumMatchingSublist' STO
      {} 'newSequencePicks' STO
      {} 'newSequence' STO
      @loop to generate the permutations that build the new sequence to pick from
      1 numPermutations
      START
        @ every loop a permutation
        newRandomPerm 'lastRandomPerm' STO
        DO
          basePerm LSHF 'newRandomPerm' STO
        UNTIL
          newRandomPerm lastRandomPerm \=/
        END
        'newBasePerm' newRandomPerm STO+
      NEXT
      @we have the new permutation list;
      @now we pick random locations in it,
      @which cannot be repeated for the moment
      numberPosResults numPermutations * 'newBasePermLength' STO
      1 noPickFromPerm
      START
        DO
          newBasePermLength RAND * 1 + IP 'randV' STO
        UNTIL
          newSequencePicks randV POS 0 ==
        END
        'newSequencePicks' randV STO+
        @we may sort the list of picks, or not
      NEXT

      @ then we pick the values
      newBasePerm
      1
      \<<
        IF
          newSequencePicks NSUB POS 0 ==
        THEN
          @not an element in the pick sequence
          DROP
        END
      \>>
      DOSUBS
      'newSequence' STO
      @now we need to check how many matching sublists there are.
      @if the list has length 5 and the sublist length 2,
      @we want indices from 1 to 4 (because at index 4 we pick elements 4 and 5)
      1 noPickFromPerm listToMatchSize 1 - -
      FOR indV
        newSequence
        indV indV listToMatchSize 1 - +
        @for a sublist of length 2 and index 4, we take elements 4 to 5
        SUB
        listToMatch
        IF
          @if the sublists are equal
          ==
        THEN
          1 'partialSumMatchingSublist' STO+
        END
      NEXT
      partialSumMatchingSublist 'sumMatchingSublist' STO+
      'numMatchesResultList' partialSumMatchingSublist STO+
    NEXT
    sumMatchingSublist totalIterations /
    numMatchesResultList
    POP
  \>>
\>>

Needless to say, after one night it had not finished the 100 iterations that I wanted (to average the results).

So I said to myself: I need something faster with the same rich mathematical environment (in this case, for list operations).

I tried the TI-Nspire, but, as I reported here, while Lua is very fast on it, its usage is very clumsy due to the locked-down format of TI-Nspire files. In short, either one has the TI-Nspire student software (through which one can create TI-Basic and Lua programs), or one has to create the programs directly on the calculator, with the exception of Lua, but Lua is mostly oriented toward graphics. One could export the results from Lua to math variables, but then one has to handle those on the calculator itself, with no possibility to copy and paste to the PC. Very, very clumsy.

So the TI-Nspire family is out for me; I could not believe they made such a "closed" system. It is all fine if one has stable, final programs to load on the calculator, but for programs that need debugging and so on, it is absolutely not nice unless one has all the additional TI software. (At least this is valid for me, comparing it to the 50g and its workflow.)

Then there is newRPL, but I still have to find a reliable way to export a program from the simulator. So even that is good for exploration, but not for big programs that require a lot of debugging.

Therefore, before also checking HRAST Basic (I do not have spare money at the moment), I decided to use the resources available on Linux.

I thought again about using SQLite, but the random function of SQLite is, well, too verbose to use for creating permutations, so I decided against it.

In the end I settled on bash, which is one of the many languages that is not bad to explore (it may be useful afterwards), although quite limited (one has to create one's own random function).

I wrote the script to run on my little headless server (my preferred choice for little stuff). It has a 260 MHz ARM processor and 32 MB of RAM. Mostly like the Prime, but with fewer optimizations, I believe.

Well, even the bash code, which required a lot of new knowledge (so that was good), was super slow. I was actually not expecting bash to take so long to generate the result.

So, since the heaviest operation was likely the list permutation, I decided to benchmark the two environments.

David's LSHF on my 50g (170 KB of RAM available):

Code:

testingLSHFspeed
\<<
  {} "basePerm" DROP
  \->
  @input
  listSize iterations
  @local
  basePerm
  \<<
    \<<
      listSize seqList 'basePerm' STO
      1 iterations
      START
        basePerm LSHF DROP
      NEXT
    \>>
    TEVAL
  \>>
\>>

and bash on the 260 MHz ARM with 32 MB of RAM (of which 6 MB free):

Code:

permutate_array() {
    # 22:15 06.06.2017 it seems to be working.
    local input_array=( "${!1}" );
    local input_array_size_int=${#input_array[@]};
    declare -a permutation_res_array; #empty array
    #https://stackoverflow.com/questions/28055346/bash-unbound-variable-array-script-s3-bash
    #https://stackoverflow.com/questions/10806357/associative-arrays-are-local-by-default
    local random_int;
    local pop_int;
    # to shuffle, we consume the input, putting its elements in random
    # order in the output; the input array is used as a stack we pop from
    for index in "${input_array[@]}"; do
        # generate_random is a separate user-defined helper (see the note
        # above about having to create one's own random function)
        random_int=$( generate_random );
        random_int=$(( $random_int % $input_array_size_int ));
        pop_int=${input_array[$random_int]};
        if [[ $random_int -lt $(( $input_array_size_int - 1 )) ]]; then
            #fill the hole left by the popped element with the last element
            #(a no-op when the last element itself was picked, hence the guard)
            input_array[$random_int]=${input_array[$(( $input_array_size_int - 1 ))]};
        fi
        #expand the result
        if [[ ${#permutation_res_array[@]} -gt 0 ]]; then
            # to avoid complaints from set -u
            permutation_res_array=( "${permutation_res_array[@]}" $pop_int );
        else
            permutation_res_array=( $pop_int );
        fi
        #shrink the input
        unset input_array[$(( $input_array_size_int - 1 ))];
        input_array_size_int=$(( $input_array_size_int - 1 ));
    done
    globalretvalue=( "${permutation_res_array[@]}" );
}

testing() {
    local base_perm_arr=( $( seq 1 1000 ) );
    local total_iterations_int=100;
    for ind in $( seq 1 $total_iterations_int ); do
        permutate_array base_perm_arr[@];
    done
}

100 times shuffling a list of 1000 elements. I hope the two versions of the shuffling procedure are very similar (I tried to follow the Fisher–Yates procedure).
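For comparison, a minimal in-place Fisher–Yates sketch of mine (not the code timed above), using bash's built-in $RANDOM so that no result array has to be rebuilt at every step:

```shell
#!/usr/bin/env bash
# In-place Fisher-Yates shuffle of the global array shuffle_arr.
fisher_yates_shuffle() {
    local i j tmp;
    for (( i = ${#shuffle_arr[@]} - 1; i > 0; i-- )); do
        # $RANDOM is 0..32767; the modulo is slightly biased for sizes
        # that do not divide 32768, which is fine for a benchmark
        j=$(( RANDOM % (i + 1) ));
        tmp=${shuffle_arr[$i]};
        shuffle_arr[$i]=${shuffle_arr[$j]};
        shuffle_arr[$j]=$tmp;
    done
}

shuffle_arr=( $( seq 1 100 ) )
fisher_yates_shuffle
# sanity check: a permutation preserves the element count and sum
sum=0
for v in "${shuffle_arr[@]}"; do sum=$(( sum + v )); done
```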

The results surprised me. I mean, with newRPL maybe I would have expected that. Surely with HPGCC, maybe with HRAST Basic. But UserRPL + SysRPL, no, I could not expect that. On the other side, aside from the fact that my bash code could be very slow, I was not expecting such taxing operations for the 260 MHz ARM device (although in bash there is a lot of array duplication).

LSHF testing, shuffling 100 times a list of 1000 elements: 4014 seconds.

bash shuffling the same list: 14532 seconds.

Woah, really well done David; I did not expect the 50g, with the original RPL emulation, to have such an advantage.

To do my test I surely need to improve the speed of the bash list shuffle (like generating the result array beforehand and then filling it); otherwise, well, it does not work well.

edit:

My code, mostly within bash, did not help: I was still at around 600 seconds for 100 shuffles of a list of 100 elements, against 83 seconds for LSHF.

Using a binary, shuf, brought the Asus 500 into the lead: 8.8 seconds, though still not by as much as I thought.

The 50g + SysRPL combination is very impressive.

Code:

permutate_array_shuf() {
    #using the shuf binary to speed things up.
    # 23:28 07.06.2017
    # 100 lists of 100 elements: 8.78 seconds
    local input_array=( "${!1}" );
    globalretvalue=( $( printf "%d\n" "${input_array[@]}" | shuf ) );
    #https://unix.stackexchange.com/questions/124478/how-to-randomize-the-output-from-seq
}

testing() {
    local base_perm_arr=( $( seq 1 100 ) );
    local total_iterations_int=100;
    for ind in $( seq 1 $total_iterations_int ); do
        permutate_array_shuf base_perm_arr[@];
        # echo "${globalretvalue[@]}";
    done
}