The Output of a Truly Random Process
Recently, I had a discussion with my data science team about whether we can challenge the claim that a set of observations comes from a random process. Data science is fundamentally about learning a hidden pattern that drives the observations; if the observations come from a purely random process, there is no pattern to learn. Let me walk through an example to illustrate.

Let's say someone claims that he is throwing a fair die (with faces numbered 1 to 6) sequentially, i.e. that the output of his throws is uniformly random, with an equal chance of getting each number from 1 to 6.

He then throws the die 12 times and shows you the output sequence. From the output, can you judge whether this really is the sequential output of a fair die? In other words, is the output really following a random process as expected?

Let's look at three situations, for example:

Situation 1: 3, 1, 6, 2, 2, 5, 4, 6, 3, 1, 5, 4
Situation 2: 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
Situation 3: 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6

At first glance, the output of Situation 1 looks like the result of a random process. Situation 2 definitely doesn't. Situation 3 is harder to judge: if you look at the proportions of the output numbers, the frequency of each number in Situation 3 matches the uniform distribution of a fair die exactly. But if you look at the ordering, Situation 3 follows a well-defined pattern that doesn't seem random at all. Therefore, I don't think the output of Situation 3 comes from a random process.
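To make this concrete, here is a minimal sketch in Python, using the illustrative Situation 3 sequence above (an assumption, chosen only to match the properties just described). The face counts look perfectly fair, yet a one-line check exposes the deterministic ordering that a frequency view cannot see.

```python
from collections import Counter

# Hypothetical example sequence for Situation 3 (an assumption, matching
# the properties described above: uniform frequencies, deterministic order).
situation_3 = [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6]

print(Counter(situation_3))  # every face appears exactly twice: looks fair

# ...but each number is exactly one more than its predecessor
# (wrapping from 6 back to 1), which no frequency check can detect.
is_cyclic = all(b == (a % 6) + 1 for a, b in zip(situation_3, situation_3[1:]))
print(is_cyclic)  # True
```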

However, this seems to be a very arbitrary choice. Why would I look at the number ordering at all? Should I look at more properties? For example: does the same number ever appear several times in a row? Do odd and even numbers alternate? How long is the longest increasing run?

As you can see, it depends on my imagination... the list could go on and on. How can I tell whether Situation 3 is following a random process?

One approach is based on the hypothesis testing methodology. We establish the null hypothesis H0 that Situation 3 follows a random process.

First, I define an arbitrary list of statistics of my choice. For example: statisticA could be the count of the most frequent number, statisticB the length of the longest increasing run, and statisticC the number of adjacent pairs where the second number is exactly one more than the first.

Second, I run a simulation to generate 12 numbers based on a random process and calculate the corresponding statistics defined above.

Third, I repeat the simulation N times and record the mean and standard deviation of each statistic.

If statisticA, B, or C of Situation 3 lies too far from the corresponding simulated mean, measured in standard deviations (i.e. its p-value falls below the chosen significance level), then we conclude that Situation 3 is not following a random process. Otherwise, we don't have enough evidence that our null hypothesis is violated, so we accept that Situation 3 follows a random process.
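Here is a minimal sketch of this simulation-based test in Python. The three statistics are the illustrative choices named above (assumptions, not a definitive list); any statistic sensitive to ordering would serve equally well.

```python
import random
import statistics

def stat_a(seq):
    """statisticA: count of the most frequent face (sensitive to imbalance)."""
    return max(seq.count(face) for face in range(1, 7))

def stat_b(seq):
    """statisticB: length of the longest strictly increasing run."""
    best = run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if b > a else 1
        best = max(best, run)
    return best

def stat_c(seq):
    """statisticC: number of adjacent pairs where the second face is
    exactly one more than the first (sensitive to 1,2,3,... ordering)."""
    return sum(b == a + 1 for a, b in zip(seq, seq[1:]))

STATS = [stat_a, stat_b, stat_c]
N = 20_000  # number of simulated sequences

# Steps 2 and 3: simulate fair-die sequences and collect each statistic.
samples = [[] for _ in STATS]
for _ in range(N):
    seq = [random.randint(1, 6) for _ in range(12)]
    for values, stat in zip(samples, STATS):
        values.append(stat(seq))

situation_3 = [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6]  # hypothetical example

# Decision rule: flag any statistic more than ~3 standard deviations
# from its simulated mean (roughly p < 0.003 under a normal approximation).
for values, stat in zip(samples, STATS):
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    z = (stat(situation_3) - mean) / sd
    print(f"{stat.__name__}: mean={mean:.2f} sd={sd:.2f} z={z:+.1f}")
```

With these particular statistics, statisticB and statisticC of the illustrative Situation 3 sequence land many standard deviations above their simulated means, so H0 would be rejected even though the raw frequencies look perfectly fair.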

A second approach is based on predictive analytics.

First, I pick a particular machine learning algorithm, say a time series forecast using ARIMA. Notice that I could also choose RandomForest and create some arbitrary input features (such as the previous output number, the maximum of the last three numbers, etc.).

Second, I train my selected predictive model on the output data of Situation 3 (in this example, Situation 3 has only 12 data points, but imagine we had many more).

Third, I evaluate my model on a held-out test set and see whether its predictions are much better than a random guess. For example, I can measure the lift of my model by comparing the RMSE (root mean square error) of my predictions against the standard deviation of the test data. If the lift is insignificant, then I conclude that Situation 3 results from a random process, because my predictive model couldn't learn any pattern.
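Below is a minimal sketch of this check, using the RandomForest variant with simple lag features. The sequences here are synthetic stand-ins (assumptions): a repeating 1-to-6 cycle plays the role of a Situation-3-like process, generated long enough to train on, and a simulated fair die provides the random baseline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def lift(sequence, n_lags=3):
    """Train on lagged values and compare prediction RMSE against the
    standard deviation of the test targets (a naive-guess baseline)."""
    x = np.array([sequence[i:i + n_lags] for i in range(len(sequence) - n_lags)])
    y = np.array(sequence[n_lags:])
    split = int(0.8 * len(y))  # keep time order: train on the past only
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(x[:split], y[:split])
    rmse = mean_squared_error(y[split:], model.predict(x[split:])) ** 0.5
    return rmse, np.std(y[split:])

rng = np.random.default_rng(0)
cyclic = [(i % 6) + 1 for i in range(600)]          # Situation-3-like process
random_seq = rng.integers(1, 7, size=600).tolist()  # genuinely random process

for name, seq in [("cyclic", cyclic), ("random", random_seq)]:
    rmse, baseline = lift(seq)
    print(f"{name}: RMSE={rmse:.2f} vs baseline std={baseline:.2f}")
# For the cyclic sequence, the lag features fully determine the next value,
# so RMSE collapses toward 0 (a big lift); for the random sequence, RMSE
# stays near the baseline, meaning no pattern was learned.
```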
