0:00:00.000,0:00:13.635 34c3 intro 0:00:13.635,0:00:19.891 Herald: The next talk will be about[br]embedded systems security and Pascal, the 0:00:19.891,0:00:25.931 speaker, will explain how you can hijack[br]debug components for embedded security in 0:00:25.931,0:00:33.170 ARM processors. Pascal is not only an[br]embedded software security engineer but 0:00:33.170,0:00:39.100 also a researcher in his spare time.[br]Please give a very very warm 0:00:39.100,0:00:41.910 welcoming good morning applause to[br]Pascal. 0:00:41.910,0:00:48.010 applause 0:00:48.010,0:00:54.489 Pascal: OK, thanks for the introduction.[br]As it was said, I'm an engineer by day in 0:00:54.489,0:00:59.483 a French company where I work as an[br]embedded system security engineer. But 0:00:59.483,0:01:04.459 this talk is mainly about my spare-time[br]activity which is researcher, hacker or 0:01:04.459,0:01:10.659 whatever you call it. This is because I[br]work with a PhD student called Muhammad 0:01:10.659,0:01:17.640 Abdul Wahab. He's a third year PhD student[br]in a French lab. So, this talk will be 0:01:17.640,0:01:23.070 mainly a presentation of his work about[br]embedded systems security and especially 0:01:23.070,0:01:29.990 debug components available in ARM[br]processors. Don't worry about the link. At 0:01:29.990,0:01:34.189 the end, there will also be a link with[br]all the slides, documentation and 0:01:34.189,0:01:42.479 everything. So, before the congress, I[br]didn't know what kind of background 0:01:42.479,0:01:46.780 you would need for my talk.
So, I[br]put there some links, I mean some 0:01:46.780,0:01:51.710 references of some talks where you will[br]have all the vocabulary needed to 0:01:51.710,0:01:57.490 understand at least some parts of my talk.[br]About computer architecture and embedded 0:01:57.490,0:02:03.079 system security, I hope you attended[br]the talk by Alastair about the formal 0:02:03.079,0:02:09.440 verification of software and also the talk[br]by Keegan about Trusted Execution 0:02:09.440,0:02:17.610 Environments (TEEs such as TrustZone).[br]And, in this talk, I will also talk about 0:02:17.610,0:02:25.880 FPGA stuff. About FPGAs, there was a talk[br]on day 2 about FPGA reverse engineering. 0:02:25.880,0:02:31.180 And, if you don't know about FPGAs, I hope[br]that you had some time to go to the 0:02:31.180,0:02:37.889 OpenFPGA assembly because these guys are[br]doing a great job on FPGA open-source 0:02:37.889,0:02:46.950 tools. When you see this slide, the first[br]question is: why did I put "TrustZone is 0:02:46.950,0:02:53.590 not enough"? Just a quick reminder about[br]what TrustZone is. TrustZone is about 0:02:53.590,0:03:03.600 separating a system between a non-secure[br]world in red and a secure world in green. 0:03:03.600,0:03:09.290 When we want to use the TrustZone[br]framework, we have lots of hardware 0:03:09.290,0:03:16.700 components, lots of software components[br]allowing us to, let's say, run separately 0:03:16.700,0:03:24.750 a secure OS and a non-secure OS. In our[br]case, what we wanted to do is to use the 0:03:24.750,0:03:31.450 debug components (you can see them on the[br]left side of the picture) to see if we can 0:03:31.450,0:03:39.300 implement some security with them.
Furthermore,[br]we wanted to use something else than 0:03:39.300,0:03:45.460 TrustZone because if you attended the[br]talk about the security of the Nintendo 0:03:45.460,0:03:51.150 Switch, you saw that the TrustZone[br]framework can be bypassed in specific 0:03:51.150,0:03:58.970 cases. Furthermore, this talk is something[br]quite complementary because we will do 0:03:58.970,0:04:07.900 something at a lower level, at the[br]processor architecture level. I will talk 0:04:07.900,0:04:14.730 in a later part of my talk about what we[br]can do between TrustZone and the approach 0:04:14.730,0:04:21.250 developed in this work. So, basically, the[br]presentation will be a quick introduction. 0:04:21.250,0:04:27.320 I will talk about some works aiming to use[br]debug components to implement some security. 0:04:27.320,0:04:33.570 Then, I will talk about ARMHEx which[br]is the name of the system we developed to 0:04:33.570,0:04:37.640 use the debug components in a hardcore[br]processor. And, finally, some results and 0:04:37.640,0:04:46.180 a conclusion. In the context of our[br]project, we are working with System-on- 0:04:46.180,0:04:54.030 Chips. So, System-on-Chips are a kind of[br]device where we have in the green part a 0:04:54.030,0:04:58.785 processor. So it can be a single core,[br]dual core or even quad core processor. 0:04:58.785,0:05:05.575 And another interesting part which is in[br]yellow in the image is the programmable 0:05:05.575,0:05:09.531 logic, which is also called an FPGA[br]in this case. And 0:05:09.531,0:05:13.870 in this kind of System-on-[br]Chip, you have the hardcore processor, 0:05:13.870,0:05:23.790 the FPGA and some links between those two[br]units. You can see here, in the red 0:05:23.790,0:05:32.840 rectangle, one of the two processors. This[br]picture is an image of a System-on-Chip 0:05:32.840,0:05:38.500 called Zynq provided by Xilinx which is[br]also an FPGA provider.
In this kind of 0:05:38.500,0:05:45.030 chip, we usually have 2 Cortex-A9[br]processors and some FPGA logic to work 0:05:45.030,0:05:53.910 with. What we want to do with the debug[br]components is to work on Dynamic 0:05:53.910,0:06:00.290 Information Flow Tracking. Basically, what[br]is information flow? Information flow is 0:06:00.290,0:06:07.040 the transfer of information from an[br]information container C1 to C2 given a 0:06:07.040,0:06:14.408 process P. In other words, if we take this[br]simple code over there: if you have 4 0:06:14.408,0:06:24.100 variables (for instance, a, b, w and x),[br]the idea is that if you have some metadata 0:06:24.100,0:06:31.990 in a, the metadata will be transmitted to[br]w. In other words, what kind of 0:06:31.990,0:06:39.560 information will we transmit into the[br]code? Basically, the information I'm 0:06:39.560,0:06:48.210 talking about in the first block is "OK, this[br]data is private, this data is public" and 0:06:48.210,0:06:55.248 we should not mix data which are public[br]and private together. Basically we can say 0:06:55.248,0:07:00.440 that the information can be binary[br]information which is "public or private" 0:07:00.440,0:07:08.620 but of course we'll be able to have[br]several levels of information. In the 0:07:08.620,0:07:16.449 following parts, this information will be[br]called taint or even tags and to keep it 0:07:16.449,0:07:22.070 a bit more simple we will use some colors to[br]say "OK, my tag is red or green" just to 0:07:22.070,0:07:33.930 say if it's private or public data. As I[br]said, if the tag contained in a is red, 0:07:33.930,0:07:42.240 the data contained in w will be red as[br]well. Same thing for b and x. If we have a 0:07:42.240,0:07:48.920 quick example over there, if we look at a[br]buffer overflow. In the upper part of the 0:07:48.920,0:07:57.100 slide you have the assembly code and on[br]the lower part, the green columns will be 0:07:57.100,0:08:03.600 the color of the tags.
On the right side[br]of these columns you have the status of 0:08:03.600,0:08:10.940 the different registers. This code is[br]basically: OK, when my input is red at the 0:08:10.940,0:08:19.900 beginning, we just put the tainted input[br]into the index variable. The register 2 0:08:19.900,0:08:28.210 which contains the idx variable will be[br]red as well. Then, when we want to access 0:08:28.210,0:08:36.979 buffer[idx] which is the second line in[br]the C code at the beginning, the 0:08:36.979,0:08:43.568 information we have there will be red as[br]well. And, of course, the result of the 0:08:43.568,0:08:50.101 operation which is x will be red as well.[br]Basically, that means that if there is a 0:08:50.101,0:08:57.050 tainted input at the beginning, we must[br]be able to transmit this information until 0:08:57.050,0:09:03.389 the return address of this code just to[br]say "OK, if this tainted input is private, 0:09:03.389,0:09:12.470 the return address at the end of the code[br]should be private as well". What can we do 0:09:12.470,0:09:17.970 with that? There is a simple code over[br]there. This is a simple code saying that if 0:09:17.970,0:09:25.890 you are a normal user, you[br]just have to open the welcome file. 0:09:25.890,0:09:33.329 Otherwise, if you are a root user, you[br]must open the password file. So this is to 0:09:33.329,0:09:38.680 say if we want to open the welcome file,[br]this is public information: you can do 0:09:38.680,0:09:45.129 whatever you want with it. Otherwise, if[br]it's a root user, maybe the password will 0:09:45.129,0:09:51.920 contain for instance a cryptographic key[br]and we should not go to the printf 0:09:51.920,0:10:01.970 function at the end of this code. The idea[br]behind that is to check whether the fs 0:10:01.970,0:10:08.290 variable containing the data of the file[br]is private or public. There are mainly 0:10:08.290,0:10:13.899 three steps for that.
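The tag-propagation rules walked through above can be sketched in a few lines of C. This is a minimal simulation for illustration only, not the actual ARMHEx logic: the register numbers and the two-level red/green lattice are assumptions taken from the slide example.

```c
/* Minimal DIFT tag-propagation sketch (illustrative, not the ARMHEx RTL).
 * Two-level lattice as in the talk: green = public, red = private. */
typedef enum { TAG_PUBLIC = 0, TAG_PRIVATE = 1 } tag_t;

#define NUM_REGS 16
tag_t reg_tag[NUM_REGS];

/* A move-like instruction copies the source tag. */
void tag_mov(int rd, int rn) { reg_tag[rd] = reg_tag[rn]; }

/* A two-source instruction (add, load with base+index, ...) takes the
 * union of the source tags: the result is private if either input is. */
void tag_binop(int rd, int rn, int rm) {
    reg_tag[rd] = (tag_t)(reg_tag[rn] | reg_tag[rm]);
}

/* The slide's scenario: idx comes from a tainted input (red), so the
 * address of buffer[idx], the loaded value x and ultimately the return
 * address must all end up red. Register numbers are made up here. */
tag_t buffer_overflow_example(void) {
    reg_tag[1] = TAG_PUBLIC;   /* r1 = &buffer, a public constant       */
    reg_tag[2] = TAG_PRIVATE;  /* r2 = idx, the tainted user input      */
    tag_binop(3, 1, 2);        /* r3 = r1 + r2, address of buffer[idx]  */
    tag_mov(0, 3);             /* r0 = x, loaded through that address   */
    return reg_tag[0];         /* tag that must reach the return address */
}
```

Running the example shows the taint reaching the final register even though only the index was tainted at the start, which is exactly the property the talk relies on.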
First of all, the[br]compilation will give us the assembly 0:10:13.899,0:10:25.290 code. Then, we must modify system calls to[br]send the tags. The tags will be, as I said 0:10:25.290,0:10:33.720 before, the private or public information[br]about my fs variable. I will talk a bit 0:10:33.720,0:10:40.699 about that later: maybe, in future works,[br]the idea is to make or at least to compile 0:10:40.699,0:10:51.790 an Operating System with integrated[br]support for DIFT. There were already some 0:10:51.790,0:10:58.459 works about Dynamic Information Flow[br]Tracking. So, we can do this kind of 0:10:58.459,0:11:04.839 information flow tracking in two manners.[br]The first one is at the application level, 0:11:04.839,0:11:14.920 working at the Java or Android level. Some[br]works also propose some solutions at the 0:11:14.920,0:11:21.100 OS level: for instance, KBlare. But what[br]we wanted to do here is to work at a lower 0:11:21.100,0:11:27.730 level so this is not at the application or[br]the OS level but more at the hardware level 0:11:27.730,0:11:34.769 or, at least, at the processor[br]architecture level. If you want to have 0:11:34.769,0:11:39.540 some information about the OS level[br]implementations of information flow 0:11:39.540,0:11:47.179 tracking, you can go to blare-ids.org[br]where you have some implementations of an 0:11:47.179,0:11:55.749 Android port and a Java port of intrusion[br]detection systems. In the rest of my talk, 0:11:55.749,0:12:05.069 I will just go through the existing works[br]and see what we can do about that. When we 0:12:05.069,0:12:10.706 talk about dynamic information flow[br]tracking at a low level, there are mainly 0:12:10.706,0:12:22.489 three approaches. The first one is the[br]one on the left side of this slide. The idea is 0:12:22.489,0:12:29.300 that in the upper side of this figure, we[br]have the normal processor pipeline: 0:12:29.300,0:12:38.059 basically, decode stage, register file and[br]Arithmetic & Logic Unit.
The basic idea is 0:12:38.059,0:12:44.410 that when we want to process tags or[br]taints, we just duplicate the processor 0:12:44.410,0:12:54.129 pipeline (the grey pipeline under the[br]normal one) just to process that data. And it 0:12:54.129,0:12:58.009 implies two things: First of all, we must[br]have the source code of the processor 0:12:58.009,0:13:08.720 itself just to duplicate the processor[br]pipeline and to make the DIFT pipeline. 0:13:08.720,0:13:16.399 This is quite inconvenient because we[br]must have the source code of the processor, 0:13:16.399,0:13:25.160 which is not really easy to get sometimes.[br]Otherwise, the main advantage of this 0:13:25.160,0:13:29.929 approach is that we can do nearly anything[br]we want because we have access to all the 0:13:29.929,0:13:34.839 code. So, we can pull all the wires we need[br]from the processor just to get the 0:13:34.839,0:13:41.470 information we need. In the second[br]approach (right side of the picture), 0:13:41.470,0:13:47.129 things are a bit different:[br]instead of having a single processor 0:13:47.129,0:13:52.459 aiming to do the normal application flow +[br]the information flow tracking, we separate 0:13:52.459,0:13:58.869 the normal execution and the[br]information flow tracking (this is the 0:13:58.869,0:14:04.639 second approach over there). This approach[br]is not satisfying either because you will 0:14:04.639,0:14:15.019 have one core running the normal[br]application but core #2 will just be able 0:14:15.019,0:14:22.360 to perform DIFT checks. Basically, it's a[br]shame to use a full processor just to perform DIFT 0:14:22.360,0:14:29.829 checks. The best compromise is[br]to make a dedicated coprocessor just to 0:14:29.829,0:14:35.670 do the information flow tracking[br]processing.
Basically, the most 0:14:35.670,0:14:42.160 interesting work on this topic is to have[br]a main core running the 0:14:42.160,0:14:47.079 normal application and a dedicated[br]coprocessor performing the IFT checks. You 0:14:47.079,0:14:54.380 will have some communications between[br]those two cores. Let's make a 0:14:54.380,0:15:01.040 quick comparison between different works:[br]if you want to run the dynamic information 0:15:01.040,0:15:09.230 flow control in pure software (I will talk[br]about that in the slide after), this is 0:15:09.230,0:15:19.809 really painful in terms of time overhead[br]because you will see that the time to do 0:15:19.809,0:15:25.329 information flow tracking in pure software[br]is really unacceptable. Regarding 0:15:25.329,0:15:30.630 hardware-assisted approaches, the best[br]advantage in all cases is that we have a 0:15:30.630,0:15:38.269 low overhead in terms of silicon area: it[br]means that, on this slide, the overhead 0:15:38.269,0:15:45.799 between the main core and the main core +[br]the coprocessor is not so important. We 0:15:45.799,0:16:00.967 will see that, in the case of my talk, the[br]dedicated DIFT coprocessor also makes it easier 0:16:00.967,0:16:10.410 to support different security policies. As I[br]said, in the pure software solution (the 0:16:10.410,0:16:17.499 first line of this table), the basic idea[br]behind that is to use instrumentation. If 0:16:17.499,0:16:23.579 you were there on day 2, you know that[br]instrumentation is the transformation of a 0:16:23.579,0:16:30.049 program into its own measurement tool. It[br]means that we will put some sensors in all 0:16:30.049,0:16:36.600 parts of my code just to monitor its[br]activity and gather some information from 0:16:36.600,0:16:42.869 it.
If we want to measure the impact of[br]instrumentation on the execution time of 0:16:42.869,0:16:48.129 an application, you can see in this[br]diagram over there the normal application 0:16:48.129,0:16:53.989 execution time, which is normalized to 1. When we[br]want to use instrumentation with it, the 0:16:53.989,0:17:06.130 minimal overhead we have is about 75%. The[br]time with instrumentation will most of 0:17:06.130,0:17:11.888 the time be twice as high as the normal[br]execution time. This is completely 0:17:11.888,0:17:18.609 unacceptable because it will just make[br]your application run slower. Basically, as I 0:17:18.609,0:17:24.409 said, the main concern of my talk is[br]reducing the overhead of software 0:17:24.409,0:17:29.880 instrumentation. I will also talk a bit[br]about the security of the DIFT coprocessor 0:17:29.880,0:17:36.679 because we can't include a DIFT[br]coprocessor without taking care of its 0:17:36.679,0:17:45.370 security. To my knowledge, this[br]is the first work about DIFT in ARM-based 0:17:45.370,0:17:53.380 system-on-chips. In the talk about the[br]security of the Nintendo Switch, the 0:17:53.380,0:17:59.460 speaker said that black-box testing is fun[br]... except that it isn't. In our case, we 0:17:59.460,0:18:05.380 have only a black box because we can't[br]modify the structure of the processor; we 0:18:05.380,0:18:13.810 must do our job without, let's say,[br]decapping the processor and so on. This is 0:18:13.810,0:18:21.910 an overall schematic of our architecture.[br]On the left side, in light green, you have 0:18:21.910,0:18:27.130 the ARM processor. In this case, this is a[br]simplified version with only one core. 0:18:27.130,0:18:32.630 And, on the right side, you have the[br]structure of the coprocessor we 0:18:32.630,0:18:40.720 implemented in the FPGA. You can notice,[br]for instance, two 0:18:40.720,0:18:48.070 things. The first is that you have some[br]links between the FPGA and the CPU.
These 0:18:48.070,0:18:54.160 links already exist in the system-[br]on-chip. And you can see another thing 0:18:54.160,0:19:03.680 regarding the memory: you have separate[br]memory for the processor and the FPGA. And 0:19:03.680,0:19:08.620 we will see later that we can use[br]TrustZone to add a layer of security, just 0:19:08.620,0:19:17.470 to be sure that we won't mix the memory[br]between the CPU and the FPGA. Basically, 0:19:17.470,0:19:24.240 when we want to work with ARM processors,[br]we must use ARM datasheets, we must read 0:19:24.240,0:19:29.660 ARM datasheets. First of all, don't be[br]afraid of the length of ARM datasheets 0:19:29.660,0:19:36.590 because, in my case, I used to work with[br]the ARM-v7 technical manual which is 0:19:36.590,0:19:49.251 already 2000 pages. The ARM-v8 manual is[br]about 6000 pages. Anyway. Of course, what 0:19:49.251,0:19:54.690 is also difficult is that the information[br]is split between different documents. 0:19:54.690,0:20:01.320 Anyway, when we want to use debug[br]components in the case of ARM, we just 0:20:01.320,0:20:07.740 have this register over there which is[br]called DBGOSLAR. We can see that, in this 0:20:07.740,0:20:15.400 register, writing the key[br]value 0xC5A-blabla to this field locks the 0:20:15.400,0:20:20.179 debug registers. And if you write any[br]other value, it will just unlock those 0:20:20.179,0:20:27.599 debug registers. So that was basically the[br]first step to enable the debug components: 0:20:27.599,0:20:38.840 just writing any other value to this register[br]to unlock my debug components. Here 0:20:38.840,0:20:44.870 is again a schematic of the overall[br]system-on-chip. As you see, you have the 0:20:44.870,0:20:50.220 two processors and, on the top part, you[br]have what are called Coresight components. 0:20:50.220,0:20:56.120 These are the famous debug components I[br]will talk about in the second part of my talk.
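That unlock step can be sketched in C. Per the ARMv7 debug architecture, the OS Lock key is 0xC5ACCE55; the memory-mapped write shown in the comment is a sketch under assumptions (the register offset and the debug base address vary per SoC and would come from the debug ROM table), so only the lock-decision logic below is meant to be exact.

```c
#include <stdint.h>

/* ARMv7 OS Lock Access Register (DBGOSLAR): writing the key sets the
 * OS lock; writing anything else clears it, re-enabling the debug
 * registers. */
#define DBGOSLAR_KEY 0xC5ACCE55u

/* Pure decision logic: would this write lock the debug registers? */
int oslar_write_locks(uint32_t value) {
    return value == DBGOSLAR_KEY;
}

/* On real hardware the unlock is a memory-mapped write, roughly:
 *
 *   volatile uint32_t *oslar =
 *       (volatile uint32_t *)(dbg_base + 0x300);  // offset: assumption
 *   *oslar = 0;  // any non-key value unlocks
 *
 * where dbg_base must be discovered from the SoC's debug ROM table. */
```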
0:20:56.120,0:21:05.680 Here is a simplified view of the debug[br]components we have in Zynq SoCs. On the 0:21:05.680,0:21:13.460 left side, we have the two processors[br](CPU0 and CPU1) and all the Coresight 0:21:13.460,0:21:21.210 components are: PTM, the one which is in[br]the red rectangle; and also the ECT which 0:21:21.210,0:21:26.460 is the Embedded Cross Trigger; and the ITM[br]which is the Instrumentation Trace 0:21:26.460,0:21:32.940 Macrocell. Basically, when we want to[br]extract some data from the Coresight 0:21:32.940,0:21:43.559 components, the basic path we use is the[br]PTM, going through the Funnel and, at this 0:21:43.559,0:21:50.750 step, we have two choices to store the[br]information taken from the debug components. 0:21:50.750,0:21:55.830 The first one is the Embedded Trace Buffer[br]which is a small memory embedded in the 0:21:55.830,0:22:04.279 processor. Unfortunately, this memory is[br]really small because it's only about 0:22:04.279,0:22:10.570 4KBytes as far as I remember. But the other[br]possibility is to export the data to 0:22:10.570,0:22:15.799 the Trace Packet Output and this is what[br]we will use to export the data to 0:22:15.799,0:22:26.309 the coprocessor implemented in the FPGA.[br]Basically, what is the PTM able to do? The 0:22:26.309,0:22:34.149 first thing the PTM can do is trace[br]whatever is in your memory. For instance, you 0:22:34.149,0:22:41.880 can trace all your code. Basically, all[br]the blue sections. But you can also, let's 0:22:41.880,0:22:47.890 say, trace specific regions of the code:[br]you can say OK, I just want to trace the 0:22:47.890,0:22:55.519 code in my section 1 or section 2 or[br]section N. Then the PTM is also able to 0:22:55.519,0:23:00.100 do some Branch Broadcasting. That is[br]something that was not present in the 0:23:00.100,0:23:06.919 Linux kernel. So, we already submitted a[br]patch, which was accepted, to manage 0:23:06.919,0:23:14.309 Branch Broadcasting in the PTM.
And we[br]can do some timestamping and other things 0:23:14.309,0:23:22.250 just to be able to store the information[br]in the traces. Basically, what does a trace 0:23:22.250,0:23:27.340 look like? Here is the simplest[br]code we could have: it's just a for loop 0:23:27.340,0:23:35.570 doing nothing. The assembly code is over[br]there. And the trace will look like this. 0:23:35.570,0:23:45.070 In the first 5 bytes, we have some kind of start[br]packet which is called the A-sync packet, 0:23:45.070,0:23:50.390 just to say "OK, this is the beginning of[br]the trace". In the green part, we'll have 0:23:50.390,0:23:56.460 the address which corresponds to the[br]beginning of the loop. And, in the orange 0:23:56.460,0:24:02.700 part, we will have the Branch Address[br]Packet. You can see that you have 10 0:24:02.700,0:24:08.299 iterations of this Branch Address Packet[br]because we have 10 iterations of the for 0:24:08.299,0:24:18.679 loop. This is just to show the[br]general structure of a trace. This is just 0:24:18.679,0:24:22.720 the control flow graph, to show what we[br]could have here. Of course, if we 0:24:22.720,0:24:27.009 have another loop at the end of this[br]control flow graph, we'll just make the 0:24:27.009,0:24:31.820 trace a bit longer just to have the[br]information about the second loop and so 0:24:31.820,0:24:40.980 on. Once we have all these traces, the[br]next step is to say: I have my tags, but how 0:24:40.980,0:24:49.220 do I define the rules to transmit my[br]tags? And this is where we will use static 0:24:49.220,0:24:55.880 analysis. Basically, in this[br]example, take the instruction "add 0:24:55.880,0:25:05.870 register1 + register2 and put the result[br]in register0". For this, we will use 0:25:05.870,0:25:12.779 static analysis which allows us to say that[br]the tag associated with register0 will be 0:25:12.779,0:25:19.029 the tag of register1 or the tag of[br]register2.
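A decoder receiving such a trace first has to find that A-sync packet. In the PFT protocol used by the PTM, A-sync is five bytes: four 0x00 bytes followed by 0x80, as in the slide. Here is a minimal scan for it; the surrounding packet decoder is omitted, so treat this as a sketch.

```c
#include <stddef.h>
#include <stdint.h>

/* Scan a raw PTM byte stream for the A-sync alignment packet
 * (0x00 0x00 0x00 0x00 0x80) and return its offset, or -1. */
long find_async(const uint8_t *buf, size_t len) {
    static const uint8_t async_pkt[5] = { 0x00, 0x00, 0x00, 0x00, 0x80 };
    if (len < 5)
        return -1;
    for (size_t i = 0; i + 5 <= len; i++) {
        size_t j;
        for (j = 0; j < 5 && buf[i + j] == async_pkt[j]; j++)
            ;
        if (j == 5)
            return (long)i;  /* decoding starts right after this packet */
    }
    return -1;
}
```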
Static analysis will be done 0:25:19.029,0:25:25.220 before running my code, just so that I have[br]all the rules for all the lines of my 0:25:25.220,0:25:33.590 code. Now that we have the trace and we know[br]how to transmit the tags all over my code, the 0:25:33.590,0:25:41.529 next step will be to do the[br]static analysis in the LLVM backend. The 0:25:41.529,0:25:46.640 final step will be about instrumentation.[br]As I said before, we can recover all the 0:25:46.640,0:25:51.809 memory addresses we need through[br]instrumentation. Otherwise, we can also 0:25:51.809,0:26:02.850 only get the register-relative memory[br]addresses through instrumentation. In the 0:26:02.850,0:26:12.179 first case, on this simple code, we can[br]instrument all the code but the main 0:26:12.179,0:26:19.909 drawback of this solution is that it will[br]drastically increase the execution 0:26:19.909,0:26:27.400 time. Otherwise, what we can do is[br]that with the store instruction over 0:26:27.400,0:26:33.529 there, we can get data from the trace:[br]basically, we will use the Program Counter 0:26:33.529,0:26:37.860 from the trace. Then, for the Stack[br]Pointer, we will use static analysis to 0:26:37.860,0:26:42.730 get information about the Stack Pointer.[br]And, finally, we can use only one 0:26:42.730,0:26:54.590 instrumented instruction at the end. If I[br]go back to this system, the communication 0:26:54.590,0:27:03.039 overhead will be the main drawback as I[br]said before because, if we have over there 0:27:03.039,0:27:09.340 the processor and the FPGA running in[br]different parts, the main problem will be 0:27:09.340,0:27:18.090 how we can transmit data in real time or,[br]at least, at the highest speed we can 0:27:18.090,0:27:27.460 between the processor and the FPGA. This[br]is the time overhead when we enable 0:27:27.460,0:27:35.299 Coresight components or not. In blue, we[br]have the basic time overhead when the 0:27:35.299,0:27:40.610 traces are disabled.
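The address-recovery strategy described above (Program Counter from the PTM trace, a stack-relative offset from static analysis, and a single instrumented instruction to recover the Stack Pointer) can be sketched as a table lookup. The table layout and names below are hypothetical; they only illustrate how the two information sources combine into one effective address.

```c
#include <stddef.h>
#include <stdint.h>

/* One entry of the (hypothetical) table produced by static analysis:
 * "the store at this PC writes at SP + sp_offset". */
typedef struct {
    uint32_t pc;        /* address of the store, seen in the PTM trace */
    int32_t  sp_offset; /* SP-relative offset, known at compile time   */
} static_entry;

/* Look up the entry for a PC taken from the trace; NULL if unknown. */
const static_entry *lookup_entry(const static_entry *tab, size_t n,
                                 uint32_t pc) {
    for (size_t i = 0; i < n; i++)
        if (tab[i].pc == pc)
            return &tab[i];
    return NULL;
}

/* Combine: SP comes from the single instrumented instruction,
 * everything else from trace + static analysis. */
uint32_t effective_addr(uint32_t sp, const static_entry *e) {
    return sp + (uint32_t)e->sp_offset;
}
```

The point of the sketch is the division of labour: only the SP value requires instrumentation at run time; the PC and the offset come for free from the trace and the offline analysis.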
And we can see that,[br]when we enable traces, the time overhead 0:27:40.610,0:27:50.620 is nearly negligible. Regarding[br]instrumentation time, we can see that with 0:27:50.620,0:27:56.780 strategy 2, which is using the[br]Coresight components, the static 0:27:56.780,0:28:02.429 analysis and the instrumentation, we can[br]lower the instrumentation overhead from 0:28:02.429,0:28:11.120 53% down to 5%. We still have some[br]overhead due to instrumentation but it's 0:28:11.120,0:28:18.219 really low compared to the related works[br]where all the code was instrumented. This 0:28:18.219,0:28:26.190 is an overview that shows in the[br]grey lines the overhead of related works 0:28:26.190,0:28:31.200 with full instrumentation and we can see[br]that, in our approach (the green 0:28:31.200,0:28:43.870 lines over there), the time overhead with[br]our code is much much smaller. Basically, 0:28:43.870,0:28:49.139 how can we use TrustZone with this? This[br]is just an overview of our system. And we 0:28:49.139,0:28:55.699 can see that we can use TrustZone to[br]separate the CPU from the FPGA 0:28:55.699,0:29:07.210 coprocessor. If we make a comparison with[br]related works, we can see that, compared to 0:29:07.210,0:29:14.260 the first works, we are able to do some[br]information flow control with a hardcore 0:29:14.260,0:29:22.289 processor, which was not the case with the[br]first two works in this table. It means 0:29:22.289,0:29:26.510 you can use a basic ARM processor to[br]do the information flow tracking instead 0:29:26.510,0:29:33.340 of having a specific processor. And, of[br]course, the area overhead, which is 0:29:33.340,0:29:39.090 another important topic, is much much[br]smaller compared to the existing works. 0:29:39.090,0:29:44.570 It's time for the conclusion. As I[br]presented in this talk, we are able to use 0:29:44.570,0:29:50.789 the PTM component to obtain runtime[br]information about my application.
This is 0:29:50.789,0:29:56.938 non-intrusive tracing because we still[br]have negligible performance overhead. 0:29:56.938,0:30:02.150 And we also improve software security[br]because we were able to add some 0:30:02.150,0:30:07.709 security to the coprocessor itself. The future[br]perspective of this work is mainly to work 0:30:07.709,0:30:16.020 with multicore processors and see if we can[br]use the same approach for Intel and maybe 0:30:16.020,0:30:21.100 ST microcontrollers to see if we can also[br]do information flow tracking in this case. 0:30:21.100,0:30:25.519 That was my talk. Thanks for listening. 0:30:25.519,0:30:33.171 applause 0:30:35.210,0:30:37.866 Herald: Thank you very much for this talk. 0:30:37.866,0:30:44.580 Unfortunately, we don't have time for Q&A,[br]so please, if you leave the room and take 0:30:44.580,0:30:48.169 your trash with you, that makes the angels[br]happy. 0:30:48.169,0:30:54.840 Pascal: I was a bit long, sorry.[br] 0:30:54.840,0:30:57.490 Herald: Another round[br]of applause for Pascal. 0:30:57.490,0:31:02.722 applause 0:31:02.722,0:31:07.512 34c3 outro 0:31:07.512,0:31:24.000 subtitles created by c3subtitles.de[br]in the year 2020. Join, and help us!