In this lecture we will see how measures motivated by or based on logical depth, derived from CTM, can find some interesting applications, even though logical depth is a much more difficult measure than algorithmic probability and algorithmic complexity, because logical depth is neither lower nor upper semicomputable. So one can never know whether one is really approximating it or not. Yet one can define frameworks in which logical depth is very interesting to apply.

We will first see how approximating logical depth by using the compression times of lossless compression algorithms is something that had not been done before, and that it is capable of producing some interesting results. We will also see how the decompression times, and the effect of making perturbations in some of these experiments, conform with the theoretical expectations of logical depth.

Remember that the main idea behind logical depth is that the decompression instructions for trivial or random objects are very simple to follow and are therefore executed very fast, with the decompression process taking only a small amount of time. This is simply because the compression algorithm either compresses very well, in the case of simple objects, or does not compress at all, in the case of random objects, for which the decompression instructions are almost nonexistent.
Longer runtimes, however, are the result of a process following a set of time-consuming decompression instructions, hence a deep, complex process according to logical depth. The decompression time of the compressed version of a string can therefore be considered an estimation of Bennett's logical depth, relative to the algorithm used.

There seems to have been no previous attempt to implement an application of ideas based on Bennett's logical depth using compression or decompression times. But we implemented a simple framework to test some of these ideas, and they turned out to be very interesting. What we did was to assess the feasibility of an application of the concept to the problem of image characterization and classification by logical depth. We performed a series of experiments of gradually increasing sophistication, starting from fully controlled experiments and proceeding to the use of the best-known compression algorithms over a large data set.

The first experiments consisted in controlling all the parameters involved. For example, adding random lines to an image would at first increase its structure, but only weakly; even before half the image was full of random black and white pixels, the estimated logical depth fell back to low values, because the image started looking random and, finally, simple.
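The estimation just described can be sketched in a few lines of code. The following is a minimal illustration, not the setup actually used in the experiments: it uses Python's bz2 as the lossless compressor and takes the wall-clock time to decompress the compressed form of a string as the logical-depth proxy.

```python
import bz2
import os
import time

def depth_estimate(data: bytes) -> float:
    """Proxy for logical depth: the time taken to decompress
    the losslessly compressed version of the data."""
    compressed = bz2.compress(data)
    start = time.perf_counter()
    bz2.decompress(compressed)
    return time.perf_counter() - start

simple = b"a" * 200_000        # trivial: compresses extremely well
random_ = os.urandom(200_000)  # incompressible: near-empty decompression instructions

# Both extremes are expected to decompress quickly; structured inputs
# (e.g. real images) tend to sit higher on this proxy.
t_simple = depth_estimate(simple)
t_random = depth_estimate(random_)
```

Any lossless compressor with a separate decompression step could stand in for bz2 here; the point is only that the measured quantity is the decompression time, not the compressed length.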
The battery of tests involved a series of images devised to tune and verify different aspects of the methodology. A more realistic data set was also used, indicating whether the results were stable enough to yield the same values for replication purposes, whether they were consistent with the theory, and whether they were consonant with an intuitive sense of a complex or sophisticated object versus a simple one.

Here we have a classification of images by compression length, using a lossless compression algorithm called Pngcrush that compresses PNG images. The results are very stable when using other compression algorithms, such as bzip2 and Compress. The PNG format is in some way ideal, because PNG is a lossless compression format, unlike formats such as JPG that store images by applying lossy compression. In other words, PNG images do not lose any information from the original image.

Here is the ranking by compression times. The execution time was given by the Mathematica function Timing, which evaluates an expression and returns a list of the time taken in seconds, together with the result obtained. The function includes only CPU time spent in the Mathematica kernel.
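The classification by compression length can be illustrated with a small sketch. This is an assumption-laden analogue, not the experiment itself: bz2 stands in for Pngcrush, and the three "images" are invented byte strings rather than real PNG files.

```python
import bz2
import os

def complexity_estimate(data: bytes) -> int:
    """Proxy for algorithmic complexity: the length of the
    losslessly compressed version of the data."""
    return len(bz2.compress(data))

# Three synthetic stand-ins for images (illustrative only)
images = {
    "flat":    b"\x00" * 50_000,                      # uniform image
    "stripes": bytes(i % 2 for i in range(50_000)),   # regular pattern
    "noise":   os.urandom(50_000),                    # random pixels
}

# Rank from most to least compressible; the random image
# should end up last, as it barely compresses at all.
ranking = sorted(images, key=lambda name: complexity_estimate(images[name]))
```

Swapping in bzip2 or Compress at the command line, as mentioned above, amounts to replacing the compressor inside `complexity_estimate`; the lecture's point is that the resulting ranking stays largely stable.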
The fact that several processes run concurrently in computing systems as part of their normal operation is one of the greatest challenges faced in attempting to measure accurately the time that the decompression process takes, even when it is isolated from all other computer processes. This instability is due to the fact that the internal operation of the computer comes in several layers, mostly at the operating-system level. So we killed all other processes and ran the experiments after dummy warm-up computations, yielding rather stable results on the same computer.

One can see the differences in the way in which compression lengths and decompression times produced the results in the last two figures. They are in agreement with the theory, which assigns low decompression times, and hence low logical depth, to both random and simple objects, but high logical depth to some objects in between.

One can see the differences as they were theoretically expected. On the left we have the mapping from algorithmic complexity to logical depth for each image, and on the right we have groups formed by logical depth based on the decompression times.
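Apart from killing background processes, which has to be done at the operating-system level, the stabilization strategy described above can be approximated in code: run dummy warm-up computations first, then repeat the measurement many times and keep the minimum, which is the sample least affected by interference from concurrent processes. This is a generic sketch of that idea, not the exact procedure used in the experiments.

```python
import bz2
import time

def stable_time(fn, warmup: int = 5, runs: int = 50) -> float:
    """Time fn() after warm-up runs, returning the minimum over
    several repetitions to filter out scheduler and cache noise."""
    for _ in range(warmup):       # dummy warm-up computations
        fn()
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

payload = bz2.compress(b"example data " * 5_000)
t = stable_time(lambda: bz2.decompress(payload))
```

Taking the minimum rather than the mean is a common choice for this kind of micro-benchmark: external interference can only add time to a run, never subtract it, so the fastest observed run is closest to the undisturbed cost.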
So we can see how simple images remained shallow according to logical depth, while random images that were incompressible, and thus ranked high according to algorithmic complexity, were ranked low for logical depth; objects with more structure ranked higher, exactly as logical depth is meant to quantify. The groups that resonate most with our intuition of what is complex, in the sense of structure or sophistication, that is, objects requiring computational time to be produced or generated, as opposed to objects that require no such computation, such as simple or random ones, were found exactly in the middle, as predicted by logical depth.

In the next lesson we will see how many of these tools and measures, in particular algorithmic probability, can make serious contributions to areas of machine learning and artificial intelligence.