Dissecting HDMI (33c3)

  • 0:00 - 0:14
    33C3 preroll music
  • 0:14 - 0:17
    Herald-Angel: Tim ‘mithro’ Ansell
    has come all the way from Australia
  • 0:17 - 0:25
    to talk to us about ‘Dissecting HDMI’
    and developing an open FPGA-based
  • 0:25 - 0:30
    capture hardware for sharing
    talks outside of the room.
  • 0:30 - 0:35
    And he will be explaining how
    to dissect it. And I’m looking forward
  • 0:35 - 0:39
    to hearing the talk in a second.
    And so please give Tim
  • 0:39 - 0:41
    a warm round of applause
    again. Thank you!
  • 0:41 - 0:51
    applause
  • 0:51 - 0:56
    Tim Ansell: Okay, hi, I’m Tim, and
    in theory, if my slides change
  • 0:56 - 1:03
    you would see that.
  • 1:03 - 1:09
    And I kind of have too many projects.
    And I’m gonna be discussing one of them.
  • 1:09 - 1:12
    This is another project that
    I gave a lightning talk on earlier.
  • 1:12 - 1:19
    If you didn’t see it, it’s an ARM
    microcontroller that goes in your USB port.
  • 1:19 - 1:22
    People wanted to know
    when they could hack on it.
  • 1:22 - 1:26
    Tomorrow at 2 PM, apparently.
  • 1:26 - 1:29
    So, the first thing that I want to say is
    I’m a software developer.
  • 1:29 - 1:33
    I’m not a hardware designer,
    I’m not an FPGA developer,
  • 1:33 - 1:37
    I’m not a professional in any of that.
  • 1:37 - 1:43
    I develop software full time.
    So this is my hobby.
  • 1:43 - 1:50
    As well, this information comes from
    a couple of projects that I started
  • 1:50 - 1:55
    but a lot of other people did the majority
    of the work and I’m just telling you
  • 1:55 - 2:00
    about it because they are too shy
    to come up and talk about it themselves.
  • 2:00 - 2:05
    So a big thank you to all these people
    who have helped me in various ways
  • 2:05 - 2:10
    regarding this. And these slides –
  • 2:10 - 2:14
    any of the blue things are links, so
    if you’re playing along at home
  • 2:14 - 2:19
    you can get to them by that URL
    and click on these things.
  • 2:19 - 2:22
    And there are probably other people I’ve
    forgotten, who are not on this list.
  • 2:22 - 2:25
    I’m very sorry.
  • 2:25 - 2:32
    So this talk could be called
    “Software guy tries hardware and complains”.
  • 2:32 - 2:37
    I’ve had a really hard time figuring out
    what to call this talk.
  • 2:37 - 2:41
    And you’ll see some other attempts
    at naming this talk better.
  • 2:41 - 2:48
    So a bit of history.
    How did I end up doing HDMI stuff?
  • 2:48 - 2:56
    So TimVideos is a group of projects
    which are trying to make it easy
  • 2:56 - 3:03
    to record and live-stream user groups
    and conferences like this event.
  • 3:03 - 3:07
    However, we want to do it without
    needing the awesome team
  • 3:07 - 3:10
    that is doing the recording here.
  • 3:10 - 3:13
    These guys are really, really organized
    and professional.
  • 3:13 - 3:19
    We want to do it where people have
    no experience at all with AV,
  • 3:19 - 3:22
    and just can make it happen.
  • 3:22 - 3:29
    And so this is how you record
    a conference or user group.
  • 3:29 - 3:32
    I’m gonna be talking about
    these two things here,
  • 3:32 - 3:36
    the HDMI2USB devices that we created.
  • 3:36 - 3:43
    They are used in our setup both for
    camera capture and for capture of slides.
  • 3:43 - 3:49
    And so HDMI2USB is FOSS hardware
    for doing HDMI capture
  • 3:49 - 3:55
    and actually has a bit of history
    with the CCC
  • 3:55 - 4:01
    because it was inspired by
    a speaker who spoke here.
  • 4:01 - 4:06
    Bunnie spoke on his NeTV board
  • 4:06 - 4:09
    which was an FPGA Man-in-the-Middle attack
  • 4:09 - 4:14
    on HDCP-secured links. His talk
    is really awesome. It’s gonna be…
  • 4:14 - 4:17
    that talk is way more technical
    than mine and gives you
  • 4:17 - 4:22
    some really awesome details about the
    cool things he did to make that work.
  • 4:22 - 4:27
    Mine is much more basic. You don’t
    need much experience with HDMI
  • 4:27 - 4:33
    to follow my talk.
    Our device works like his does,
  • 4:33 - 4:39
    except his was deliberately designed
    to not allow capture,
  • 4:39 - 4:42
    our design allows capture.
    It effectively man-in-the-middles
  • 4:42 - 4:47
    the connection
    between the presenter’s laptop
  • 4:47 - 4:55
    and the projector, and provides
    a high-quality capture out the USB2 port.
  • 4:55 - 4:59
    I use an FPGA to do that.
    This is because
  • 4:59 - 5:03
    using an FPGA makes hardware problems
    software problems, and as I said
  • 5:03 - 5:09
    I’m a software developer, I prefer
    software problems to hardware problems.
  • 5:09 - 5:14
    And the way it kind of works is
    it appears as a UVC webcam so that
  • 5:14 - 5:18
    you can use it with Skype or Hangouts
    or any of those things without needing
  • 5:18 - 5:24
    any drivers on sensible operating
    systems like MacOS and Linux.
  • 5:24 - 5:28
    On Windows you need a driver that
    tells it to use the internal driver.
  • 5:28 - 5:30
    It’s kind of weird.
    And also a serial port
  • 5:30 - 5:35
    because we have the ability to switch
    which input goes to which output.
  • 5:35 - 5:39
    It’s kind of like a matrix.
  • 5:39 - 5:43
    And so this is the open source
    hardware we designed.
  • 5:43 - 5:49
    It’s in KiCAD, you can find it on Github.
  • 5:49 - 5:52
    I’m quite proud of it.
    It’s quite a good little kit.
  • 5:52 - 5:58
    We don’t use all the features of it yet,
    but it’s pretty awesome.
  • 5:58 - 6:01
    And it’s in use!
  • 6:01 - 6:06
    We used this technology to capture
    at a bunch of conferences:
  • 6:06 - 6:09
    PyCon in Australia,
    linux.conf.au in Australia
  • 6:09 - 6:12
    – as I said, I’m Australian.
  • 6:12 - 6:15
    DebConf, though, are not Australian.
  • 6:15 - 6:23
    They used it in – sorry –
    in South Africa, I think.
  • 6:23 - 6:25
    And there are a whole bunch of
    other people around the world
  • 6:25 - 6:28
    who are using this,
    which is pretty awesome.
  • 6:28 - 6:31
    The main reason I wanted it to be
    open source was so that
  • 6:31 - 6:37
    other people could use them
    and learn from them and fix problems,
  • 6:37 - 6:42
    because there are a lot of problems
    we’ve run into.
  • 6:42 - 6:45
    And the other thing is
    this is all full of Python.
  • 6:45 - 6:50
    We do use Python
  • 6:50 - 6:54
    to create the firmware for the FPGA,
    and all these other areas.
  • 6:54 - 6:58
    If you wanna find out more about that
    go to my talk at PyCon AU
  • 6:58 - 7:03
    which was recorded with the very device
    I’m talking about.
  • 7:03 - 7:05
    microphone noise
    Oops, sorry!
  • 7:05 - 7:12
    But as I said, this is going
    to include a lot of problems.
  • 7:12 - 7:16
    The first one is:
    People still use VGA!
  • 7:16 - 7:19
    This kind of makes me sad.
  • 7:19 - 7:24
    Because VGA is not HDMI.
    It was invented in 1987
  • 7:24 - 7:26
    and it’s an analog signal.
  • 7:26 - 7:30
    While HDMI shares some history
    with VGA,
  • 7:30 - 7:35
    You can’t use the same techniques
    for capturing HDMI that you can for VGA.
  • 7:35 - 7:41
    So why do you still use it?
    It’s old and bad!
  • 7:41 - 7:46
    We developed a VGA expansion board
    to effectively allow us to capture
  • 7:46 - 7:49
    VGA using the same thing.
  • 7:49 - 7:53
    By ‘developed’ I mean
    we have designed them, and some exist
  • 7:53 - 7:56
    but nobody’s actually finished the
    firmware to make them work yet.
  • 7:56 - 8:00
    So, I’d love help there.
  • 8:00 - 8:04
    There is also another problem:
  • 8:04 - 8:08
    I want to do this
    all open source, as I said.
  • 8:08 - 8:12
    The HDMI ecosystem has
    commercial cores you can buy
  • 8:12 - 8:16
    and they work reasonably well,
    but you have to buy them
  • 8:16 - 8:20
    and you don’t get the source code to them
    or if you do get the source code to them
  • 8:20 - 8:23
    you can’t share them with other people.
  • 8:23 - 8:27
    As well, I wanted it to be open source
    because we wanted to solve
  • 8:27 - 8:31
    all those problems that people
    have when plugging in their laptop
  • 8:31 - 8:34
    and it not working.
  • 8:34 - 8:39
    And the commercial cores aren’t designed
    to allow us to give the ability
  • 8:39 - 8:47
    to do that –
    solve those problems permanently.
  • 8:47 - 8:48
    So we created a new implementation!
  • 8:48 - 8:52
    As anybody who has ever done a
    reimplementation or a new implementation
  • 8:52 - 8:59
    or something knows, it means that you get
    new bugs, which I’ll describe quite a bit.
  • 8:59 - 9:01
    So this talk could be called
  • 9:01 - 9:06
    ‘Debugging HDMI’ rather than
    ‘Dissecting HDMI’ because it includes
  • 9:06 - 9:12
    a lot of information about
    how things went wrong.
  • 9:12 - 9:17
    Ok, that’s kind of the introduction of why
    we are here and why I’m talking about this.
  • 9:17 - 9:23
    So how does HDMI work?
  • 9:23 - 9:26
    Well, HDMI is actually reasonably old now.
  • 9:26 - 9:31
    It was created in 2002.
  • 9:31 - 9:37
    It’s based on the DVI specification.
    DVI was created in 1999.
  • 9:37 - 9:41
    So DVI is 17 years old.
  • 9:41 - 9:49
    And DVI was designed to replace VGA
    and shares a lot of similar history.
  • 9:49 - 9:56
    HDMI is backwards compatible with DVI
    electrically and protocol wise,
  • 9:56 - 9:59
    but uses a different connector.
  • 9:59 - 10:03
    This is an HDMI connector.
    You’ve probably seen them all before.
  • 10:03 - 10:10
    If you look closely you see that
    there are 19 pins on the HDMI connector.
  • 10:10 - 10:13
    That’s Pin 1.
  • 10:13 - 10:15
    So what do all these pins do?
  • 10:15 - 10:18
    There are five pins
    which are used for Ground.
  • 10:18 - 10:25
    There is one pin which is used for Power,
    it gives you 5 volts at 50 milliamps.
  • 10:25 - 10:28
    This isn’t much.
    You can’t do much with 50 milliamps
  • 10:28 - 10:36
    except maybe some type of adapter,
    converter or power a whole microcontroller.
  • 10:36 - 10:39
    Some Chinese devices try
    to draw like an amp from this.
  • 10:39 - 10:45
    That’s not very good. So that’s another
    thing you should watch out for.
  • 10:45 - 10:52
    There are three high speed data pairs
    which transmit the actual video data.
  • 10:52 - 10:58
    And they share a clock pair.
    So that’s these pins here.
  • 10:58 - 11:02
    And then there are five pins
    which are used for low speed data.
  • 11:02 - 11:07
    So that’s all the pins
    on the HDMI connector.
  • 11:07 - 11:13
    You might have noticed that there was
    a whole bunch of different things
  • 11:13 - 11:17
    I said there. And you need to actually
    understand a whole bunch
  • 11:17 - 11:22
    of different protocols
    to understand how HDMI works.
  • 11:22 - 11:28
    There is a bunch of low speed ones
    and there is a bunch of high speed ones.
  • 11:28 - 11:32
    I’m not gonna talk
    about all of those protocols
  • 11:32 - 11:36
    because there are just too many
    to go into in an hour talk.
  • 11:36 - 11:38
    The low speed protocols
    I’m not gonna talk about are
  • 11:38 - 11:42
    CEC and Audio Return;
    and I’m not gonna talk about any of the
  • 11:42 - 11:48
    Auxiliary Data protocol
    that is high speed, or HDCP.
  • 11:48 - 11:51
    If you want HDCP go
    and look at bunnie’s talk.
  • 11:51 - 11:56
    It’s much better than mine.
    But… or ethernet!
  • 11:56 - 12:02
    What I will be talking about is
    the EDID and DDC protocols,
  • 12:02 - 12:06
    the 8b/10b encoding of the pixel data
  • 12:06 - 12:10
    and the 2b/10b encoding
    of the control data.
  • 12:10 - 12:14
    Interestingly enough, this is actually DVI.
  • 12:14 - 12:21
    I’m not telling you about HDMI, I’m
    really describing to you how DVI works.
  • 12:21 - 12:26
    Again, many titles.
  • 12:26 - 12:30
    Starting with the low speed protocol:
  • 12:30 - 12:33
    EDID or DDC.
  • 12:33 - 12:38
    I’m gonna use these two terms
    interchangeably,
  • 12:38 - 12:41
    they’ve been so confused now
    that they are interchangeable,
  • 12:41 - 12:44
    in my opinion.
  • 12:44 - 12:47
    This is something they inherited from VGA.
  • 12:47 - 12:53
    It was invented and added
    to VGA in August 1994.
  • 12:53 - 12:57
    It was for plug and play of monitors
    so that you could plug in your monitor
  • 12:57 - 13:02
    and your graphics card would just work
    rather than requiring you to tell
  • 13:02 - 13:08
    your graphics card exactly what resolution
    and stuff your monitor worked at.
  • 13:08 - 13:13
    It uses I2C [I squared C]
    and a small EEPROM.
  • 13:13 - 13:16
    These are the pins that it uses.
  • 13:16 - 13:23
    15 is the Clock pin and
    16 is the Data pin,
  • 13:23 - 13:28
    and then it uses the Ground, and the
    5 Volts is used to power that EEPROM.
  • 13:28 - 13:33
    And in some ways it also uses pin 19
    because pin 19 (Hot Plug Detect) is how you detect
  • 13:33 - 13:38
    that there is something there
    to read from.
  • 13:38 - 13:41
    It uses I2C.
  • 13:41 - 13:46
    I2C is a low speed protocol that runs
    at either 100 kHz or 400 kHz.
  • 13:46 - 13:51
    Technically EDID is not I2C,
    but it is.
  • 13:51 - 13:56
    It only supports the 100 kHz version,
    though, in theory,
  • 13:56 - 13:59
    everything on this planet
    can be read at 400 kHz.
  • 13:59 - 14:03
    It is also very well explained elsewhere,
    so I’m not going to explain in detail
  • 14:03 - 14:09
    what I2C is or does,
    or how to implement it.
  • 14:09 - 14:16
    The EEPROM is a 24 series.
    It’s found at I2C address 0x50.
  • 14:16 - 14:22
    It uses 8-bit addressing, which gives
    you 256 bytes of data.
  • 14:22 - 14:28
    Again, this EEPROM and how to talk to it
    is very well described on the internet.
  • 14:28 - 14:31
    So I’m not gonna describe it here.
    If you’ve used EEPROMs
  • 14:31 - 14:36
    over I2C it’s likely
    you’ve used a 24 series EEPROM.
  • 14:36 - 14:43
    Probably bigger ones,
    256 bytes is pretty small.
  • 14:43 - 14:46
    So like a 16-bit one,
  • 14:46 - 14:52
    but EDID supports only
    the 8-bit ones.
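
As a rough illustration (not from the talk): on Linux, reading that EEPROM can look something like the sketch below. It assumes the smbus2 Python library and i2c-dev, and the bus number 1 is a placeholder for wherever your DDC lines show up.

    # Hedged sketch: read a 256-byte EDID from the 24-series EEPROM at 0x50.
    from smbus2 import SMBus

    EDID_ADDR = 0x50  # fixed I2C address of the EDID EEPROM

    with SMBus(1) as bus:  # bus number is system-specific
        edid = bytearray()
        for offset in range(0, 256, 32):  # SMBus block reads cap out at 32 bytes
            edid += bytes(bus.read_i2c_block_data(EDID_ADDR, offset, 32))

    # The base EDID block has a fixed 8-byte header, and its
    # first 128 bytes sum to 0 modulo 256 (checksum).
    assert edid[:8] == bytes.fromhex("00ffffffffffff00")
    assert sum(edid[:128]) % 256 == 0
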
  • 14:52 - 14:55
    The kind of interesting part of EDID
    is the data structure:
  • 14:55 - 14:59
    it’s a custom binary format that describes
  • 14:59 - 15:02
    what the contents of the EEPROM is.
  • 15:02 - 15:05
    Again, Wikipedia has a really
    good description of this.
  • 15:05 - 15:08
    So I’m not gonna go
    into much detail.
  • 15:08 - 15:13
    But the important thing is that it
    describes resolution, frequency
  • 15:13 - 15:18
    and format for talking to the monitor.
  • 15:18 - 15:21
    This is really important because
    if you try and send
  • 15:21 - 15:25
    the wrong resolution, frequency or format
    the monitor is not gonna understand it.
  • 15:25 - 15:29
    This is kind of what EDID is used for.
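
A hedged sketch of how that binary format yields a mode, based on the publicly documented EDID layout (the first 18-byte Detailed Timing Descriptor starts at offset 54); the function name is mine:

    def preferred_mode(edid: bytes) -> tuple[int, int, float]:
        """Resolution and refresh rate from the first Detailed
        Timing Descriptor of a base EDID block."""
        d = edid[54:72]                         # 18-byte descriptor
        pclk_hz = (d[0] | d[1] << 8) * 10_000   # pixel clock, stored in 10 kHz units
        h_active = d[2] | (d[4] & 0xF0) << 4    # upper 4 bits live in byte 4
        h_blank  = d[3] | (d[4] & 0x0F) << 8
        v_active = d[5] | (d[7] & 0xF0) << 4
        v_blank  = d[6] | (d[7] & 0x0F) << 8
        refresh = pclk_hz / ((h_active + h_blank) * (v_active + v_blank))
        return h_active, v_active, refresh
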
  • 15:29 - 15:32
    sipping, sound of water bottle
  • 15:32 - 15:38
    So this is where things
    start getting a bit hairy.
  • 15:38 - 15:43
    Presenters come up to the front and
    the first question you see anybody ask is:
  • 15:43 - 15:45
    What resolution do I use?
  • 15:45 - 15:50
    And they get a panel like this which
    has a bazillion resolutions to select from.
  • 15:50 - 15:55
    And the thing is, despite your monitor
    saying that it supports
  • 15:55 - 16:00
    many formats they lie.
  • 16:00 - 16:05
    It turns out that projectors lie
    a lot more than normal displays.
  • 16:05 - 16:08
    I don’t know why they are special.
  • 16:08 - 16:12
    So this is what a supported format
    looks like.
  • 16:12 - 16:15
    It’s really great.
  • 16:15 - 16:19
    As well, I care about
    capturing the data.
  • 16:19 - 16:25
    And so I want things
    in a format that is
  • 16:25 - 16:27
    easy for me to capture.
  • 16:27 - 16:32
    I also don’t want to be scaling
    people’s images and text
  • 16:32 - 16:35
    because scaling looks really bad.
    So if someone selects
  • 16:35 - 16:40
    like a really low resolution and
    we scale it up it looks really horrible.
  • 16:40 - 16:44
    It makes text unreadable; and
    presenters are renowned,
  • 16:44 - 16:48
    especially at technical conferences,
    for using tiny, tiny fonts.
  • 16:48 - 16:52
    And so we need to use as much
    resolution as we can.
  • 16:52 - 16:56
    How we solve this is we emulate
    our own EEPROM in the FPGA
  • 16:56 - 16:59
    and ignore what the projector
    tells us it can do.
  • 16:59 - 17:03
    We tell the presenter that this is
    what we support.
  • 17:03 - 17:07
    You might notice that it kind of
    solves the problem
  • 17:07 - 17:11
    of what resolution to use.
  • 17:11 - 17:12
    Offer a single solution…
  • 17:12 - 17:16
    Offering a single option makes it
    very hard to choose the wrong one.
  • 17:16 - 17:20
    That’s good! We solved the problem!
  • 17:20 - 17:23
    No, we haven’t solved the problem.
  • 17:23 - 17:25
    We were recording PyCon AU
    and we found that
  • 17:25 - 17:31
    some Mac laptops were
    refusing to work.
  • 17:31 - 17:35
    To understand the cause of this
    you need to understand
  • 17:35 - 17:37
    a little bit about how the world works.
  • 17:37 - 17:41
    There are two major frequencies
    in the world: 50 Hz and 60 Hz.
  • 17:41 - 17:44
    50 Hz is mainly used
    in the “Rest of the World”
  • 17:44 - 17:47
    and 60 Hz is used in America
    and Japan and a few other places
  • 17:47 - 17:52
    but that’s kind of a very rough thing.
  • 17:52 - 17:55
    Take a laptop sold in Australia.
    Australia is 50 Hz.
  • 17:55 - 17:58
    It’s part of the “Rest of the World”.
    You’d think that the laptop
  • 17:58 - 18:02
    could do 50 Hz. Plus everything
    is global these days, right?
  • 18:02 - 18:08
    I can plug in my power pack
    for my laptop in the US or Australia,
  • 18:08 - 18:12
    like, it should work everywhere right!
  • 18:12 - 18:15
    No. Sad!
  • 18:15 - 18:20
    We solved it by claiming
    that we were American
  • 18:20 - 18:25
    and supporting 60 frames per second
    rather than 50 frames per second.
  • 18:25 - 18:28
    So I guess a display
    with an American accent.
  • 18:28 - 18:33
    We deployed this hotfix
    on the Friday evening.
  • 18:33 - 18:37
    And on Saturday all the problems
    that we were having on Friday
  • 18:37 - 18:42
    went away. So this is kind of
    the power of an open source solution
  • 18:42 - 18:46
    and having complete control
    over your hardware.
  • 18:46 - 18:50
    Nowadays we actually offer both 60 and 50
  • 18:50 - 18:54
    because for display capture
    if you’re displaying stuff
  • 18:54 - 19:00
    at 50 frames per second you’re
    probably speaking a lot faster than I am.
  • 19:00 - 19:06
    It’s really weird, these
    128 bytes are really hard
  • 19:06 - 19:12
    and the number one cause
    of why a person’s laptop
  • 19:12 - 19:15
    can’t talk to the projector.
  • 19:15 - 19:18
    It gets a trophy!
  • 19:18 - 19:24
    To try and figure out why that is
    we created EDID.tv.
  • 19:24 - 19:27
    It’s supposed to be
    a repository of EDID data.
  • 19:27 - 19:31
    There is a Summer of Code project,
    Python/Django/Bootstrap
  • 19:31 - 19:34
    and an EDID grabber tool that
    you can run on your laptop.
  • 19:34 - 19:37
    I’d love help making this work better.
  • 19:37 - 19:42
    It hasn’t had much love since
    the Summer of Code student made that work.
  • 19:42 - 19:46
    But it would be really nice to have an
    open database of everybody’s EDID data
  • 19:46 - 19:51
    out there. There are a bunch
    of closed ones. I can pay to buy one,
  • 19:51 - 19:56
    but I’d really love to have an open one.
  • 19:56 - 20:00
    As well maybe we don’t need
    the whole capture solution,
  • 20:00 - 20:03
    maybe you can just override the EDID.
  • 20:03 - 20:09
    The C3VOC here actually developed
    a version that overrides EDID for VGA.
  • 20:09 - 20:12
    I have a design which works for HDMI.
  • 20:12 - 20:20
    It just uses a low cost microprocessor
    to pretend to be an EEPROM.
  • 20:20 - 20:23
    As well, DisplayPort is not HDMI.
    Don’t get these two confused,
  • 20:23 - 20:26
    they are very, very different protocols.
  • 20:26 - 20:30
    They have an Auxiliary Channel
    like EDID and CEC.
  • 20:30 - 20:34
    I have boards to decode them
    here at CCC.
  • 20:34 - 20:37
    So if you’re interested in that
    come and talk to me
  • 20:37 - 20:43
    because we would really like to do
    similar things for DisplayPort.
  • 20:43 - 20:47
    That is the slow speed data.
  • 20:47 - 20:50
    Sip from bottle
  • 20:50 - 20:53
    What about the high speed data?
  • 20:53 - 20:58
    Each pixel on your screen is
  • 20:58 - 21:04
    basically three colors
    in the DVI standard: Red, Green, Blue.
  • 21:04 - 21:08
    And each one is a byte in size.
  • 21:08 - 21:16
    Each of the colors is mapped to
    a channel on the HDMI connector.
  • 21:16 - 21:20
    You can kind of see the Red and
    the Green and the Blue channels.
  • 21:20 - 21:23
    Each channel is a differential pair.
  • 21:23 - 21:27
    You get a Plus and a Negative
    and a Shield.
  • 21:27 - 21:35
    And they use twisted pair to try
    and reduce the noise reception of these,
  • 21:35 - 21:39
    because these are quite high speed.
  • 21:39 - 21:42
    And they have a dedicated Shield to
    try and – again – reduce the noise
  • 21:42 - 21:47
    that is captured.
  • 21:47 - 21:53
    This is kind of where it gets to
    the ‘differential signaling’ part
  • 21:53 - 21:58
    of the ‘TMDS’ that is
    the kind of code name
  • 21:58 - 22:05
    for the internal protocol that is used
    on the high speed data.
  • 22:05 - 22:10
    They also…
    all these channels share a Clock.
  • 22:10 - 22:14
    That clock is called the Pixel Clock.
  • 22:14 - 22:17
    But each of these channels
    is a serial channel.
  • 22:17 - 22:23
    It transmits data at 10 bits.
  • 22:23 - 22:26
    They… every 10 bits – sorry,
  • 22:26 - 22:32
    every clock cycle there are 10 bits of data
    transmitted on each of these channels.
  • 22:32 - 22:35
    There is a shared clock and
    each of the channels is running
  • 22:35 - 22:39
    at effectively
    ten times that shared clock.
  • 22:39 - 22:42
    This is kind of what
    the whole system looks like.
  • 22:42 - 22:45
    You have your Red, Green, Blue channels.
  • 22:45 - 22:49
    You take your 8 bits of input data
    on each channel
  • 22:49 - 22:54
    and you convert it to the 10 bits
  • 22:54 - 22:57
    that we’re going to transmit,
    and it goes across the cable
  • 22:57 - 23:01
    and then we decode it on the other side.
  • 23:01 - 23:07
    The question is: what does
    the 8 bit to 10 bit encoding
  • 23:07 - 23:12
    look like and how do you understand that.
  • 23:12 - 23:17
    It is described by this diagram here.
    It’s a bit small so I’ll bring it up.
  • 23:17 - 23:23
    This is what it looks like.
    Yes… sure…
  • 23:23 - 23:28
    …what? This diagram – like –
  • 23:28 - 23:32
    I’ve spent hours looking at this,
    and it is an extremely hard diagram
  • 23:32 - 23:39
    to decode.
    It’s very, very hard to understand.
  • 23:39 - 23:43
    And it turns out the encoding
    protocol is actually quite easy!
  • 23:43 - 23:48
    It’s three easy steps – approximately.
  • 23:48 - 23:52
    So I’m going to show you all how
    to write an encoder or a decoder.
  • 23:52 - 23:55
    That diagram is just for the encoder.
  • 23:55 - 24:01
    They have a similar diagram that
    is not the inverse of this for decoding.
  • 24:01 - 24:04
    Again, almost impossible to read.
  • 24:04 - 24:07
    The three steps: First we’re
    going to do ‘Control’ or ‘Pixel’,
  • 24:07 - 24:11
    choose which one to do. Then we’re
    going to either encode Control data
  • 24:11 - 24:16
    or encode Pixel data.
  • 24:16 - 24:21
    A couple of important points
    to go through first:
  • 24:21 - 24:24
    The Input data
    – no matter how wide it is –
  • 24:24 - 24:29
    is converted to 10 bit symbols.
  • 24:29 - 24:32
    Data goes to symbols.
    When we’re talking about them
  • 24:32 - 24:37
    being transmitted we talk about them
    in symbols, when it’s decoded into pixels
  • 24:37 - 24:40
    we talk about them in data.
  • 24:40 - 24:46
    As well, things need
    to be kept DC-balanced.
  • 24:46 - 24:48
    I’ve rushed ahead.
  • 24:48 - 24:53
    The question is: “Why 10 bits?”
    Our pixels were 8 bits.
  • 24:53 - 24:56
    I will explain why
    in the Pixel Data section.
  • 24:56 - 24:59
    But it’s important that all our symbols
    are the same size.
  • 24:59 - 25:05
    We’re always transmitting 10 bits
    every clock cycle.
  • 25:05 - 25:09
    Keeping DC-balanced:
  • 25:09 - 25:13
    long runs of 1s and 0s are bad.
  • 25:13 - 25:16
    There are lots of reasons for this.
  • 25:16 - 25:22
    I tend to think of it like this:
    HDMI isn’t AC coupled
  • 25:22 - 25:27
    but you can kind of think of it
    like AC coupled.
  • 25:27 - 25:30
    It’s not for clock recovery.
  • 25:30 - 25:35
    We have a clock pair that is used
    to give our Clock signal.
  • 25:35 - 25:38
    There are lots of lies on the internet
    that say that the reason
  • 25:38 - 25:43
    we’re going to keep DC balance
    is because of Clock.
  • 25:43 - 25:48
    But no, that’s not the case.
  • 25:48 - 25:52
    So what does DC balance mean?
  • 25:52 - 25:57
    A symbol which has lots of 1s
    or lots of 0s
  • 25:57 - 26:01
    is going to be considered DC-biased
  • 26:01 - 26:05
    if it has more 1s than 0s, or vice versa.
  • 26:05 - 26:09
    This is kind of what it’s like:
    this symbol here
  • 26:09 - 26:13
    has lots of 1s and if you add up
    all the 1s you can see it’s got
  • 26:13 - 26:17
    quite a positive bias.
    If it were inverted and had lots of 0s
  • 26:17 - 26:20
    it would have a negative DC bias.
  • 26:20 - 26:27
    That DC bias over time
    causes us problems.
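
In code, the bias of a single symbol is just a popcount difference. A tiny sketch (mine, not from the talk):

    def disparity(sym: int) -> int:
        """DC bias of a 10-bit symbol: number of 1s minus number of 0s."""
        ones = bin(sym & 0x3FF).count("1")
        return ones - (10 - ones)

    print(disparity(0b1111101000))  # 6 ones, 4 zeros -> bias of +2
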
  • 26:27 - 26:32
    Those are the two important things we have
    to keep in mind when looking at the rest.
  • 26:32 - 26:34
    sound of bottle sip
  • 26:34 - 26:38
    The first thing we need to figure out is
    whether we are transmitting Control data
  • 26:38 - 26:40
    or Pixel data.
  • 26:40 - 26:44
    Turns out that what is happening
    in your display is,
  • 26:44 - 26:47
    we are transmitting something
    that’s actually bigger
  • 26:47 - 26:51
    than what you
    see on your screen.
  • 26:51 - 26:57
    This is not to scale. The Control data
    periods are much, much smaller.
  • 26:57 - 27:06
    The Control data is in orange
    and the Pixel data is in purple-pink.
  • 27:06 - 27:12
    So why does this exist?
    It exists because of old CRT monitors.
  • 27:12 - 27:17
    And for those in the audience
    who were kind of born after CRT monitors,
  • 27:17 - 27:20
    this is what they look like.
  • 27:20 - 27:23
    The way they work is,
    they have an electron beam
  • 27:23 - 27:28
    that scans across,
    lighting up the phosphor.
  • 27:28 - 27:35
    This electron beam can’t just be…
    get back to the other side of the screen
  • 27:35 - 27:39
    straight away, or get back to the top of
    the screen. And so these periods
  • 27:39 - 27:44
    where we are transmitting Control data
    were to allow the electron beam
  • 27:44 - 27:47
    to get back to the location
    where it needed to start
  • 27:47 - 27:53
    transmitting the next set of data.
  • 27:53 - 27:56
    That’s why it exists.
    Why do we care?
  • 27:56 - 27:59
    Because the encoding schemes
    for Control and Pixel data
  • 27:59 - 28:04
    are actually quite different.
  • 28:04 - 28:07
    This is the main difference.
    I’m going to come back to this slide
  • 28:07 - 28:12
    a bit later. But again, the
    important thing to see here is
  • 28:12 - 28:15
    that despite the encoding scheme
    being quite different
  • 28:15 - 28:22
    the output is 10 bits in size.
  • 28:22 - 28:25
    That first step – choosing whether
    it’s Pixel or Control data –
  • 28:25 - 28:30
    is described by this bit of the diagram.
    You might notice that’s
  • 28:30 - 28:34
    not the first thing in the diagram.
  • 28:34 - 28:38
    How do you convert Control data
    to Control symbols?
  • 28:38 - 28:41
    First we need to know what
    Control data is. There are two bits,
  • 28:41 - 28:44
    there is the HSync and the VSync signal.
  • 28:44 - 28:51
    They provide basically
    the horizontal and vertical synchronization.
  • 28:51 - 28:55
    They are kind of left over from VGA.
    We don’t actually need them
  • 28:55 - 29:01
    in HDMI or DVI to know
    where the edges are
  • 29:01 - 29:07
    because we can tell the difference
    between Control and Pixel data.
  • 29:07 - 29:12
    But they kind of still exist
    because of backwards compatibility.
  • 29:12 - 29:16
    This means that we have two bits of data
    that we need to convert to 10 bits of data.
  • 29:16 - 29:22
    So, a 2b/10b scheme.
  • 29:22 - 29:27
    How they do it is they just hand-picked
    four symbols that were going to be
  • 29:27 - 29:30
    these Control data symbols.
  • 29:30 - 29:35
    These are the four symbols. There’s
    some interesting properties with them.
  • 29:35 - 29:39
    They are chosen to be DC-balanced.
    They roughly have the same number
  • 29:39 - 29:47
    of 0s and 1s. So we don’t have to worry about
    the DC bias with these symbols very much.
  • 29:47 - 29:52
    They are also chosen to have
    seven or more transitions from 0 to 1
  • 29:52 - 29:59
    in them. This number of transitions
  • 29:59 - 30:03
    is used to understand
    the phase relationship
  • 30:03 - 30:07
    of the different channels.
    So if you remember this diagram,
  • 30:07 - 30:13
    we have a cable going between
    the transmitter and the receiver.
  • 30:13 - 30:16
    These are, again, very high speed signals.
  • 30:16 - 30:22
    And even if the transmitter was
    transmitting everything at the same time,
  • 30:22 - 30:28
    the cable isn’t ideal and might
    delay some of the symbols.
  • 30:28 - 30:33
    The bits on one channel
    [might take] longer than others.
  • 30:33 - 30:37
    By having lots of these transitions
    we can actually find
  • 30:37 - 30:42
    the phase relationship between
    each of the channels and then
  • 30:42 - 30:48
    recover the data. And so
    that’s why these Control symbols
  • 30:48 - 30:52
    have a large number
    of transitions in them.
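
Since the four symbols are fixed by the DVI 1.0 spec, encoding the two control bits is a four-entry table. A minimal sketch (the names are mine; the symbol values are the spec's):

    # (c1, c0) -> 10-bit control symbol, per the DVI 1.0 spec.
    CTRL_SYMBOLS = {
        (0, 0): 0b1101010100,
        (0, 1): 0b0010101011,
        (1, 0): 0b0101010100,
        (1, 1): 0b1010101011,
    }

    def encode_control(c1: int, c0: int) -> int:
        """Map two control bits (VSync, HSync on channel 0) to a symbol."""
        return CTRL_SYMBOLS[(c1, c0)]
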
  • 30:52 - 30:57
    More on that later when we get to the
    implementation. And I’m running out of time.
  • 30:57 - 31:01
    This part of the diagram is the
    Control data encoding.
  • 31:01 - 31:04
    sip from bottle
  • 31:04 - 31:07
    What about Pixel data
    and the Pixel symbols?
  • 31:07 - 31:14
    Again, in DVI each channel
    of the Pixel is 8 bits.
  • 31:14 - 31:17
    And the encoding scheme is described
    by the rest of the diagram.
  • 31:17 - 31:22
    But again, it’s actually
    really, really simple.
  • 31:22 - 31:27
    This encoding scheme is called 8b/10b,
  • 31:27 - 31:30
    because it takes 8 bits
    and converts them to 10 bits.
  • 31:30 - 31:34
    However, there is a huge danger
    here because IBM also invented
  • 31:34 - 31:37
    the 8b/10b scheme
    that is used in everything.
  • 31:37 - 31:41
    This is used in DisplayPort, it’s used
    in PCI Express, it’s used in SATA,
  • 31:41 - 31:44
    it’s used in pretty much everything
    on the planet.
  • 31:44 - 31:48
    This is not the encoding TMDS uses.
  • 31:48 - 31:52
    You can lose a lot of time
    trying to map this diagram
  • 31:52 - 31:57
    to the IBM coding scheme,
    and going these are not the same.
  • 31:57 - 32:03
    That is because they’re not the same.
    This is a totally different coding scheme.
  • 32:03 - 32:08
    Encoding Pixel data is a two-step process.
    I did say it was three-ish steps
  • 32:08 - 32:12
    to do this.
    The first step is we want to reduce
  • 32:12 - 32:18
    the transitions in the data.
  • 32:18 - 32:20
    How do we do this? –
    Sorry, why do we do this?
  • 32:20 - 32:24
    Because this, again, is
    a high speed channel.
  • 32:24 - 32:28
    We want to reduce the cross-talk
    between the lanes.
  • 32:28 - 32:31
    They are actually quite close
    to each other.
  • 32:31 - 32:35
    So by reducing the number
    of transitions we can reduce
  • 32:35 - 32:40
    the probability that the signal propagates
  • 32:40 - 32:46
    from one channel to the next.
    And how do we do it?
  • 32:46 - 32:50
    We’re gonna choose one
    of two encoding schemes.
  • 32:50 - 32:54
    An XOR encoding scheme
    or an XNOR encoding scheme.
  • 32:54 - 32:58
    How do we do the XOR encoding scheme?
    It’s actually pretty simple.
  • 32:58 - 33:01
    We set the Encoded Bit
    same as the first Data Bit
  • 33:01 - 33:04
    and then the next Encoded Bit
    is the previous Encoded Bit
  • 33:04 - 33:10
    XORed with the next Data bit.
  • 33:10 - 33:14
    And then we just repeat until
    we’ve done the 8 bits.
  • 33:14 - 33:16
    So this is how we do the XOR encoding.
  • 33:16 - 33:20
    The XNOR encoding is the same process,
    except instead of using XOR
  • 33:20 - 33:24
    it uses XNOR.
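
Both schemes in a few lines of Python, a sketch assuming the bit ordering described above (bit 0 is encoded first):

    def xor_encode(d: int) -> int:
        """q[0] = d[0]; then q[i] = q[i-1] XOR d[i] for bits 1..7."""
        q = d & 1
        for i in range(1, 8):
            q |= (((q >> (i - 1)) ^ (d >> i)) & 1) << i
        return q

    def xnor_encode(d: int) -> int:
        """Same process with XNOR: q[i] = NOT (q[i-1] XOR d[i])."""
        q = d & 1
        for i in range(1, 8):
            q |= ((~((q >> (i - 1)) ^ (d >> i))) & 1) << i
        return q
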
  • 33:24 - 33:29
    How do we choose
    which one of these to use?
  • 33:29 - 33:35
    If the Input Data byte
    has fewer than four 1s
  • 33:35 - 33:40
    we use the XOR. If it has more
    than four 1s we use the XNOR.
  • 33:40 - 33:43
    And then there’s a tie-break
    if you have exactly four 1s.
  • 33:43 - 33:48
    The important thing here is that this
    method is determined by the Data byte only.
  • 33:48 - 33:53
    There is no hidden state here
    or continuous change.
  • 33:53 - 34:00
    Every pixel has a one-to-one
    mapping to an encoding.
  • 34:00 - 34:04
    Then we append a bit
    on the end that indicates
  • 34:04 - 34:09
    whether we chose XOR,
    XNOR encoding of that data.
  • 34:09 - 34:15
    And so that converts
    our 8-bit Input Pixels
  • 34:15 - 34:22
    to 9 bits of encoded data, effectively
    our 8-bit encoded sequence
  • 34:22 - 34:28
    and then one bit to indicate whether
    we chose XOR, or XNOR encoding
  • 34:28 - 34:34
    for that Data byte. So that’s it there.
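
Putting step one together, a sketch of the selection rule plus that ninth bit (it reuses xor_encode/xnor_encode from the sketch above; the DVI spec breaks the exactly-four-1s tie on data bit 0):

    def minimize_transitions(d: int) -> int:
        """8 bits in, 9 bits out; bit 8 = 1 means XOR was used."""
        ones = bin(d & 0xFF).count("1")
        if ones > 4 or (ones == 4 and (d & 1) == 0):
            return xnor_encode(d)            # bit 8 stays 0: XNOR
        return xor_encode(d) | (1 << 8)      # bit 8 set: XOR
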
  • 34:34 - 34:38
    This encoding is actually very good
    at reducing transitions.
  • 34:38 - 34:44
    On average, we had roughly
    eight transitions previously,
  • 34:44 - 34:49
    now we have roughly three-ish,
    so it’s pretty cool.
  • 34:49 - 34:51
    I have no idea how they figured this out.
  • 34:51 - 34:56
    I’m assuming some very smart
    mathematicians were involved
  • 34:56 - 35:00
    because discovering this is beyond me.
  • 35:00 - 35:02
    And that describes the top part
    of this process.
  • 35:02 - 35:04
    sounds of scratching nose and beard
  • 35:04 - 35:12
    This is where, in the TMDS, the
    Transition Minimization comes from,
  • 35:12 - 35:14
    that step there in the encoding process.
  • 35:14 - 35:16
    But there is still one more step.
  • 35:16 - 35:22
    We need to keep the channel
    DC-balanced, as I explained earlier.
  • 35:22 - 35:28
    How can we do that? Because
    not all pixels are guaranteed to be
  • 35:28 - 35:32
    at zero DC bias
    like the Control symbols are.
  • 35:32 - 35:37
    We do it by keeping a running count
    of the DC bias we have,
  • 35:37 - 35:42
    and then, if we have a positive DC bias
  • 35:42 - 35:46
    and the symbol is also
    positively biased, we invert it.
  • 35:46 - 35:52
    Or, if we have a negative DC bias
    and the symbol has a negative DC bias,
  • 35:52 - 35:56
    we invert it.
    And the reason we do this is
  • 35:56 - 36:01
    because when we invert a symbol we
    convert all the 1s to 0s which means
  • 36:01 - 36:06
    a negative DC bias
    becomes a positive DC bias.
  • 36:06 - 36:11
    As I said, we chose – because we are already
    negative and the thing was negative –
  • 36:11 - 36:18
    we convert it to plus. It means we’re
    going to drive the running DC bias value
  • 36:18 - 36:23
    back towards zero.
    We might overshoot, but the next stage
  • 36:23 - 36:27
    we’ll keep trying to oscillate up and
    down, and on average over time
  • 36:27 - 36:31
    we keep a DC bias of zero.
  • 36:31 - 36:37
    And as I said. Then, to indicate
    whether or not we inverted
  • 36:37 - 36:43
    or kept it
    straight through,
  • 36:43 - 36:48
    we add another bit on the end.
    So that’s how we get our 10 bit
  • 36:48 - 36:54
    encoding scheme.
    We have the 8 bits of encoded data,
  • 36:54 - 36:59
    then one bit indicating whether or not
    it used XOR/XNOR encoding,
  • 36:59 - 37:04
    and then one bit to indicate whether
    or not we inverted the symbol.
  • 37:04 - 37:10
    That describes this bottom part
    of the chart.
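
A simplified sketch of that second stage, following the talk's description (invert when the symbol's bias has the same sign as the running bias). The actual DVI flowchart has extra special cases, e.g. when the running count is zero, so treat this as didactic rather than spec-exact:

    def dc_balance(q_m: int, cnt: int) -> tuple[int, int]:
        """9 bits in, 10 bits out, plus the updated running bias."""
        data = q_m & 0xFF
        ones = bin(data).count("1")
        bias = ones - (8 - ones)
        if (cnt > 0 and bias > 0) or (cnt < 0 and bias < 0):
            # Invert the 8 data bits; bit 9 records the inversion.
            out = (q_m & 0x100) | (~data & 0xFF) | (1 << 9)
            return out, cnt - bias
        return q_m, cnt + bias
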
  • 37:10 - 37:15
    Now you can see partly
    why this chart is kind of confusing.
  • 37:15 - 37:19
    It’s in no way what I think
    of as a logical diagram.
  • 37:19 - 37:22
    This might be how you implement it
    in hardware if you really understand
  • 37:22 - 37:29
    the protocol, but not a very good diagram
    for explaining what’s going on. And…
  • 37:29 - 37:32
    sip from bottle
  • 37:32 - 37:34
    As you see it’s actually pretty simple.
  • 37:34 - 37:40
    In summary this is
    the interesting information
  • 37:40 - 37:45
    about the two different encoding schemes.
  • 37:45 - 37:49
    Because we minimized
    the transitions in the Pixel data
  • 37:49 - 37:53
    we can actually tell
    Control and Pixel data apart
  • 37:53 - 37:56
    by looking at how many transitions
    are in the symbol.
  • 37:56 - 38:01
    If it has six or more transitions
    it must be a Control symbol.
  • 38:01 - 38:06
    If it has four or less
    it must be a Pixel symbol.
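
Counting transitions is one shift, one XOR and a popcount. A sketch:

    def transitions(sym: int) -> int:
        """Number of 0<->1 transitions inside a 10-bit symbol."""
        return bin((sym ^ (sym >> 1)) & 0x1FF).count("1")

    def looks_like_control(sym: int) -> bool:
        return transitions(sym) >= 6
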
  • 38:06 - 38:10
    You now know
    how to encode TMDS data
  • 38:10 - 38:12
    and how to decode TMDS data
  • 38:12 - 38:18
    because if you want to decode
    you just do the process backwards.
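
Backwards means: undo the inversion bit, then undo the XOR or XNOR. A self-contained sketch of a pixel-symbol decoder:

    def decode_pixel(q: int) -> int:
        """10-bit pixel symbol -> 8-bit channel value."""
        if q & (1 << 9):            # bit 9 set: the 8 data bits were inverted
            q ^= 0xFF
        d = q & 1
        for i in range(1, 8):
            t = ((q >> i) ^ (q >> (i - 1))) & 1
            if not (q & (1 << 8)):  # bit 8 clear: XNOR was used
                t ^= 1
            d |= t << i
        return d
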
  • 38:18 - 38:25
    Congratulations!
    How do you actually implement this?
  • 38:25 - 38:28
    You can just write the XOR logic
  • 38:28 - 38:31
    and a little counter
    that keeps track of the DC bias
  • 38:31 - 38:35
    and all that type of thing
    in the FPGA.
  • 38:35 - 38:39
    I’m not going to describe that
    because I don’t have much time.
  • 38:39 - 38:43
    But if you followed the process
    that I have given you
  • 38:43 - 38:46
    it should be pretty easy.
  • 38:46 - 38:51
    But… and this is what we use currently.
  • 38:51 - 38:54
    You could actually use a lookup table.
    What we are doing is
  • 38:54 - 38:58
    converting 8 bits of data
    to 10 bits of data.
  • 38:58 - 39:04
    That is a lookup table process,
    pretty easy.
  • 39:04 - 39:09
    FPGAs are really good at
    doing ‘lookup table’-type processes,
  • 39:09 - 39:13
    and it also allows you then
    to extend this system
  • 39:13 - 39:18
    to those other protocols
    like the 4b/10b that is used
  • 39:18 - 39:21
    for the Auxiliary data.
  • 39:21 - 39:25
    So we are looking at that in the future.
    It uses a few more resources
  • 39:25 - 39:28
    but it’s a lot more powerful.
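
In software the lookup-table idea looks like this: precompute the stateless 8b-to-9b step once per possible byte, and keep only the stateful DC-balance step at run time. A sketch reusing the functions from the sketches above:

    # 256-entry table for the transition-minimization step.
    QM_TABLE = [minimize_transitions(d) for d in range(256)]

    def encode_pixel(d: int, cnt: int) -> tuple[int, int]:
        """Pixel byte + running bias -> 10-bit symbol + updated bias."""
        return dc_balance(QM_TABLE[d], cnt)
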
  • 39:28 - 39:33
    This is kind of what your encoder
    will look like, and your decoder.
  • 39:33 - 39:37
    It’s quite simple,
    it takes in your 10 bits of data
  • 39:37 - 39:39
    and outputs either
    your 8 bits of Pixel data
  • 39:39 - 39:44
    or your 2 bits of Control data
    and the data type.
  • 39:44 - 39:47
    This is kind of what if you went
    into our design and looked at it
  • 39:47 - 39:50
    at a high level, in the schematic,
  • 39:50 - 39:52
    you’d probably see a block
    that looks like this.
  • 39:52 - 39:56
    The encoder is slightly more complicated
    because you also have the DC bias count
  • 39:56 - 40:01
    that you have to keep track of.
    But, again,
  • 40:01 - 40:04
    the data goes in
    and the data comes out.
  • 40:04 - 40:09
    That’s simple, right?
  • 40:09 - 40:12
    This kind of extends to Auxiliary data,
    or if you get an error,
  • 40:12 - 40:15
    like if you…
    There are 1024 symbols
  • 40:15 - 40:19
    that you can have in 10 bits of data.
  • 40:19 - 40:22
    Not all of them are valid.
    So if you get one of these invalid symbols
  • 40:22 - 40:26
    you know you have an error.
  • 40:26 - 40:30
    However, things happen quite quickly
  • 40:30 - 40:35
    when you times them by ten.
    And so our Pixel Clock
  • 40:35 - 40:39
    for 640x480 is 25 MHz.
    When you times that by ten
  • 40:39 - 40:45
    you get 250 Mbit/s per channel.
    When you’re doing 720p
  • 40:45 - 40:49
    you’re doing 750 Mbit/s per channel.
  • 40:49 - 40:54
    And 1080p is at 1500 Mbit/s per channel.
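
The arithmetic behind those numbers, using the talk's rounded pixel clocks:

    for mode, pclk_mhz in [("640x480", 25), ("720p", 75), ("1080p", 150)]:
        rate = pclk_mhz * 10  # 10 bits per symbol, one symbol per pixel clock
        print(f"{mode}: {pclk_mhz} MHz x 10 = {rate} Mbit/s per channel")
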
  • 40:54 - 40:59
    FPGAs are fast, but
    they’re not really that fast
  • 40:59 - 41:04
    in the range that I can afford to buy.
    I’m sure the military has ones
  • 41:04 - 41:08
    that go this fast, but
    I’m not as rich as them.
  • 41:08 - 41:12
    But they do include a nice hack
    to solve this.
  • 41:12 - 41:15
    They are called SerDes.
    They basically turn parallel data
  • 41:15 - 41:19
    into serial data.
  • 41:19 - 41:22
    This is what the boxes look like.
  • 41:22 - 41:24
    You give them your TMDS parallel data
  • 41:24 - 41:28
    and they convert it to
    high speed serial data for you.
  • 41:28 - 41:33
    They are a little bit fiddly to use
    and your best option is to go and find
  • 41:33 - 41:37
    a person who has already configured this
    for your FPGA
  • 41:37 - 41:40
    and follow what they do.
  • 41:40 - 41:44
    “Hamster” – Mike “Hamster” Field – has
    really good documentation on
  • 41:44 - 41:50
    how to use these in a Spartan6.
    These are also unique to your FPGA,
  • 41:50 - 41:54
    so different FPGAs are going to have
    different control schemes.
  • 41:54 - 41:57
    But if you are using a Spartan6
  • 41:57 - 42:02
    then go and look up what
    Mike “Hamster” Field is
  • 42:02 - 42:08
    doing for configuring these.
  • 42:08 - 42:14
    Remember how I said
    our system has a serial console?
  • 42:14 - 42:19
    Because we have that system
    we can actually delve quite deep
  • 42:19 - 42:23
    into what’s happening
    internally in the system.
  • 42:23 - 42:25
    sip from bottle
  • 42:25 - 42:33
    And print it out.
    This is debug output from one of our systems.
  • 42:33 - 42:35
    You can see…
  • 42:35 - 42:41
    The first thing is the phase relationship
    between each of the channels.
  • 42:41 - 42:45
    The next one is whether
    we’re getting valid data
  • 42:45 - 42:50
    on each of the channels and then
    we’ve got the error rate for that channel,
  • 42:50 - 42:54
    whether all channels synchronized,
    and then some resolution information.
  • 42:54 - 43:00
    You can see that this has got
    a 74 MHz Pixel Clock.
  • 43:00 - 43:05
    There are three columns because
    there are Red, Green and Blue channels.
  • 43:05 - 43:09
    This gives us some very interesting
    debugging capabilities.
  • 43:09 - 43:13
    If you plug in a cable
    and you’re getting errors
  • 43:13 - 43:17
    on the Blue channel and nowhere else
  • 43:17 - 43:22
    it’s highly likely there’s
    something wrong with that cable.
  • 43:22 - 43:26
    This is a very powerful tool
    that allows us to figure out
  • 43:26 - 43:30
    what’s going wrong in a system.
  • 43:30 - 43:35
    It’s something you can’t really get
    with the commercial versions of this.
  • 43:35 - 43:39
    But what about errors?
    Everything I’m talking about now
  • 43:39 - 43:42
    is a little bit experimental,
    we haven’t actually implemented this.
  • 43:42 - 43:46
    But it’s some ideas about
    what we can do because we now
  • 43:46 - 43:50
    have complete control of our decoder.
  • 43:50 - 43:54
    As I said, there’s 1024 possible choices
    for 10 bit symbols,
  • 43:54 - 43:58
    of which 460 are valid Pixel symbols,
  • 43:58 - 44:02
    4 are valid Control symbols
    and 560 symbols
  • 44:02 - 44:05
    should never ever be seen no matter what.
  • 44:05 - 44:12
    That’s like 56% of our space
    that should never be seen.
  • 44:12 - 44:16
    But it’s actually better than that!
    We know because of the running DC bias
  • 44:16 - 44:25
    that there are 256 valid Pixel symbols
  • 44:25 - 44:31
    at any one point. You can’t have the…
    if you’ve got a negative DC bias
  • 44:31 - 44:37
    you can’t have a Pixel symbol
    which continues to drive you negative.
  • 44:37 - 44:44
    Actually, 74% of our space at any time
  • 44:44 - 44:48
    is not allowed to exist.
  • 44:48 - 44:52
    This means that a huge number
    of the invalid symbols
  • 44:52 - 44:56
    are only near one other valid symbol.
  • 44:56 - 45:02
    And so we can actually correct them!
    We can go: “This symbol must have been
  • 45:02 - 45:05
    this other symbol,
    because it’s not a valid symbol,
  • 45:05 - 45:09
    it must be a bit error
    from this other symbol.”
  • 45:09 - 45:13
    So we can correct these errors.
    This is quite cool.
  • 45:13 - 45:19
    We can correct about 70% of
  • 45:19 - 45:22
    single bit flip errors in Pixel data.
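
A sketch of that idea: given the set of symbols that are valid for the current running bias, try every single-bit flip and accept the fix only when it is unambiguous. (Building the `valid` set, by enumerating the encoder, is omitted here.)

    def correct_symbol(sym: int, valid: set[int]) -> int | None:
        """Return a corrected symbol, or None if uncorrectable."""
        if sym in valid:
            return sym
        hits = [sym ^ (1 << i) for i in range(10) if (sym ^ (1 << i)) in valid]
        return hits[0] if len(hits) == 1 else None
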
  • 45:22 - 45:29
    But sadly there is some that we can’t.
  • 45:29 - 45:35
    But we can detect that we got
    invalid Pixel data.
  • 45:35 - 45:40
    So the fact that there is an error
    is important.
  • 45:40 - 45:44
    In this case we’ve got two pixels
    that we received correctly
  • 45:44 - 45:49
    and we got a pixel that we know
    is an invalid value
  • 45:49 - 45:54
    and then two more pixels
    that we received correctly.
  • 45:54 - 45:55
    You can imagine this is a Blue channel,
  • 45:55 - 45:59
    so the first ones were not very blue.
  • 45:59 - 46:03
    Then the decoded value for this is
  • 46:03 - 46:07
    very, very blue, like very light blue
    and then some other ones.
  • 46:07 - 46:10
    This looks really bad, right?
  • 46:10 - 46:15
    This was probably a whole blue block.
  • 46:15 - 46:20
    One pixel difference
    of that size
  • 46:20 - 46:24
    is probably not a valid value,
  • 46:24 - 46:26
    and so we can cover them up!
  • 46:26 - 46:30
    We can go…
    the two pixels on either side
  • 46:30 - 46:32
    and average them and fix that pixel.
  • 46:32 - 46:38
    This allow us to correct a whole bunch
    more of errors that are occurring.
  • 46:38 - 46:41
    And as we’re about to take this data
  • 46:41 - 46:46
    and run it through a JPEG encoder
  • 46:46 - 46:50
    this doesn’t actually affect
    the quality of the output
  • 46:50 - 46:53
    all that much and allows us to fix
    things that would otherwise
  • 46:53 - 47:00
    be giant glaring glitches in the output.
  • 47:00 - 47:02
    That’s some interesting information about
  • 47:02 - 47:09
    how you do TMDS decoding
    and how we can fix some errors.
  • 47:09 - 47:13
    The thing is, we can do it
    even better than this
  • 47:13 - 47:16
    because it’s an open source project.
  • 47:16 - 47:20
    Maybe you have some idea
    about how we can improve
  • 47:20 - 47:23
    the SerDes performance.
    Maybe you have an idea about
  • 47:23 - 47:29
    how to do TMDS decoding on
    much lower power devices
  • 47:29 - 47:34
    than we use. It’s open source!
    You can look at the code
  • 47:34 - 47:39
    and you can improve it.
    And we would love you to do it!
  • 47:39 - 47:43
    The thing is that I have a lot of hardware
    but not much time.
  • 47:43 - 47:46
    If you have lots of time
    and not much hardware,
  • 47:46 - 47:50
    I think I can solve this problem.
  • 47:50 - 47:55
    These are links to the HDMI2USB project
  • 47:55 - 47:59
    and the TimVideos project;
    and all our code, our hardware,
  • 47:59 - 48:05
    everything is on GitHub
    under open source licenses.
  • 48:05 - 48:11
    And here is some bonus screen shots that
    I wasn’t able to fit in other locations.
  • 48:11 - 48:14
    You can see these small errors.
  • 48:14 - 48:17
    That one was kind of a big error.
  • 48:17 - 48:26
    This is what happens when
    your DDR memory is slightly broken.
  • 48:26 - 48:32
    Yeah…
    but – yeah!
  • 48:32 - 48:35
    And that is my talk!
  • 48:35 - 48:43
    applause
  • 48:43 - 48:45
    Herald: Excellent!
    Thank you very much, mithro.
  • 48:45 - 48:49
    As you’ve noticed, we have a couple of
    microphones standing around in the room.
  • 48:49 - 48:53
    If you have any questions for mithro
    please line up behind the microphones
  • 48:53 - 48:58
    and I will allow you to ask the questions.
    We have a question from the Internet!?
  • 48:58 - 49:02
    Signal Angel: Yes, thank you!
    Do you know if normal monitors
  • 49:02 - 49:05
    do similar error recovery or hiding?
  • 49:05 - 49:09
    Tim: I know of no commercial
    implementation that does
  • 49:09 - 49:13
    any type of error correction.
    The solution for the commercial guys
  • 49:13 - 49:20
    is to effectively never get errors.
  • 49:20 - 49:24
    They can do that because
  • 49:24 - 49:27
    they don’t have to deal with
    the angry speakers on the ground
  • 49:27 - 49:31
    going wild because their slides look weird.
  • 49:31 - 49:35
    And, as well, they are probably working
    with better quality hardware
  • 49:35 - 49:39
    than we are using. We’re trying
    to make things as cheap as possible.
  • 49:39 - 49:44
    And so we are pushing the boundaries
    of a lot of the devices we are using.
  • 49:44 - 49:48
    So we are more likely to get
    errors than they are.
  • 49:48 - 49:52
    Herald: We have quite a lot of questions.
    Remember – questions, not comments!
  • 49:52 - 49:56
    Microphone number 1, please!
  • 49:56 - 50:11
    rustling sound from audience
    coughing
  • 50:11 - 50:13
    Tim: Yes!
  • 50:13 - 50:18
    unrecognizable question from audience
  • 50:18 - 50:21
    Sorry, I don’t quite understand
    what’s going on! chuckles
  • 50:21 - 50:27
    Herald: Do we have a translation?
  • 50:27 - 50:30
    Voice from audience: Audio Angel!
  • 50:30 - 50:34
    Tim: Audio problem?
  • 50:34 - 50:45
    Herald speaks to person
    in front of stage in German
  • 50:45 - 50:52
    Tim: I’ll be around afterwards,
    If you want to chat to me, ahm…
  • 50:52 - 50:57
    Herald: And we might do that… ah…
    write you on the computer afterwards.
  • 50:57 - 51:02
    Second question from
    microphone number 3, please!
  • 51:02 - 51:06
    Question: Hello? Ah, yes. Can you
    determine the quality of an HDMI cable,
  • 51:06 - 51:09
    e.g. by measuring the bit error rate
    of each of the three pairs
  • 51:09 - 51:11
    and also some jitter on the clock,
    and that kind of…?
  • 51:11 - 51:14
    Tim: Yes we can!
  • 51:14 - 51:18
    The quality of an HDMI cable should be
    that there are zero bit errors.
  • 51:18 - 51:24
    So anything that has non-zero bit errors
    we chop up and throw away.
  • 51:24 - 51:28
    This gets interesting
    when you have very long cables.
  • 51:28 - 51:31
    We can actually see that
    the longer the cable is
  • 51:31 - 51:37
    the harder it is for them
    to keep zero bit errors.
  • 51:37 - 51:43
    So yes, we can kind of judge
    the quality of the cable.
  • 51:43 - 51:48
    But it’s also hard because
  • 51:48 - 51:51
    it depends on what the sender is doing.
  • 51:51 - 51:55
    If the sender is of a lower quality
  • 51:55 - 51:58
    and the cable is low quality
    you might get bit errors.
  • 51:58 - 52:00
    But if the sender is of a high quality
  • 52:00 - 52:07
    and the cable is of a lower quality
  • 52:07 - 52:11
    they might cancel each other out
    and still be fine.
  • 52:11 - 52:16
    We can’t just go: “This is a good cable”
  • 52:16 - 52:21
    because we don’t actually have
    any control over our…
  • 52:21 - 52:26
    how powerful our sender is on this device.
  • 52:26 - 52:28
    If we could kind of turn down the sender
  • 52:28 - 52:31
    and see where things start going wrong
    that would be pretty cool.
  • 52:31 - 52:35
    If anybody wants to
    look at building such a device
  • 52:35 - 52:38
    I’d love to help you do that.
  • 52:38 - 52:41
    Herald: We have another question
    from microphone number 5.
  • 52:41 - 52:45
    Question: Your hardware,
    the HDMI2USB hardware…
  • 52:45 - 52:48
    Tim: Yes!
    Question: Is it available for simply ordering
  • 52:48 - 52:52
    or does it have to be soldered by hand,
    or…
  • 52:52 - 52:56
    Tim: You can not solder this board by
    hand unless you are much, much better
  • 52:56 - 53:02
    than I am. It uses Ball Grid Array parts
    because it’s an FPGA.
  • 53:02 - 53:05
    This is one here.
    You can buy them.
  • 53:05 - 53:10
    We’re working with a manufacturer in India
    who builds them for us.
  • 53:10 - 53:15
    We work with them,
    and it was pretty awesome.
  • 53:15 - 53:18
    We’re also working on new hardware.
    I’ve got a whole bunch
  • 53:18 - 53:22
    of FPGA hardware here
    that you can come and have a look at
  • 53:22 - 53:26
    and I’ll probably move it out
    into the hallway afterwards.
  • 53:26 - 53:30
    Again, if you’re interested
    in the hardware and you have a use case,
  • 53:30 - 53:35
    chat to me!
    Because I like to solve problems
  • 53:35 - 53:39
    of people who are not having hardware
    and my employer pays me too much.
  • 53:39 - 53:43
    So I get to use my discretionary funds
  • 53:43 - 53:48
    for helping out people
    doing open source stuff.
  • 53:48 - 53:54
    applause
  • 53:54 - 53:59
    Herald: We have at least four more
    questions. Microphone number 2, please!
  • 53:59 - 54:05
    Question: Do you think it would be
    possible to get a 1080p image
  • 54:05 - 54:10
    out of the open source
    hardware board you use?
  • 54:10 - 54:17
    Tim: Yes, I do, but it requires
    us to do some hard work
  • 54:17 - 54:19
    that we haven’t had time to do yet.
  • 54:19 - 54:27
    And for us 720p at 60 frames
    per second is good enough.
  • 54:27 - 54:32
    The USB connection
  • 54:32 - 54:36
    is limited in bandwidth because
    we don’t have an H.264 encoder,
  • 54:36 - 54:43
    we only have MJPEG. If somebody wants
    to write us an open source, say, WebM
  • 54:43 - 54:47
    rather than H.264 encoder
  • 54:47 - 54:51
    that might start to become more interesting.
    We also have ethernet, Gigabit ethernet,
  • 54:51 - 54:56
    on this board. It should be pretty easy
    to stream the data out the ethernet.
  • 54:56 - 54:59
    I, again, need help.
  • 54:59 - 55:02
    The ethernet controller works.
    We can telnet into the board
  • 55:02 - 55:07
    and control it via Telnet.
    We just need somebody to
  • 55:07 - 55:11
    actually connect the data,
    the high speed data side up.
  • 55:11 - 55:15
    We use it for debugging and stuff.
  • 55:15 - 55:19
    Mike “Hamster” Field again,
    really big thank you to him,
  • 55:19 - 55:22
    he is an amazing designer,
  • 55:22 - 55:29
    he built 1080p60 that is
    a little bit out-of-spec
  • 55:29 - 55:33
    but actually works really well on hardware
  • 55:33 - 55:39
    that is almost identical to ours.
    He also did the DisplayPort,
  • 55:39 - 55:43
    like a 4k-DisplayPort which
    we can do on our board.
  • 55:43 - 55:50
    If you only need one or two
    of the 1080p things
  • 55:50 - 55:53
    DisplayPort connectors can be
    converted to HDMI quite easily
  • 55:53 - 55:56
    and you can do that on them.
  • 55:56 - 55:59
    Yes, I think that’s possible,
    but again:
  • 55:59 - 56:06
    open source … hobbyist …
    need developers …
  • 56:06 - 56:08
    Herald: We’ll take one question
    from the internet!
  • 56:08 - 56:13
    Signal Angel: Thank you. Have you
    considered JPEG2000?
  • 56:13 - 56:18
    Tim: No, I’ve not. And the main reason
    is that I want to be a webcam.
  • 56:18 - 56:26
    I want to pretend to be a webcam. The UVC
    standard, which is the USB webcam standard,
  • 56:26 - 56:31
    does not support JPEG2000.
  • 56:31 - 56:34
    There’s no reason we couldn’t support
    JPEG2000 when connected to Linux,
  • 56:34 - 56:39
    like we could fix the Linux driver
    to add JPEG2000 support.
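If you want to see this constraint for yourself, you can list the pixel formats a UVC webcam advertises on Linux. A quick sketch, assuming the v4l-utils package is installed and the camera is /dev/video0 (both assumptions):

```python
# List the pixel formats a V4L2/UVC capture device advertises.
# Assumes v4l-utils is installed and the device is /dev/video0.
import subprocess

subprocess.run(
    ["v4l2-ctl", "--device", "/dev/video0", "--list-formats"],
    check=True,  # raise if v4l2-ctl is missing or the device doesn't exist
)
```

You will typically see MJPG and raw YUYV listed there; JPEG2000 is not among the formats UVC defines.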
  • 56:39 - 56:45
Again, I don’t know if there are any good
    open source FPGA implementations
  • 56:45 - 56:53
of JPEG2000. So that’s also a blocker.
  • 56:53 - 56:58
    But if you’re interested in helping out
    – come and talk to me.
  • 56:58 - 57:02
    As I said, I would very much love
  • 57:02 - 57:07
    to chat to you and solve
    the problems you’re having
  • 57:07 - 57:12
with getting going.
    As well, we have t-shirts.
  • 57:12 - 57:16
    I’m wearing a t-shirt that we have, and
    I will send everybody who contributes
  • 57:16 - 57:24
    a t-shirt. Whether that’s fixing our
    website, helping on documentation,
  • 57:24 - 57:28
    helping people on IRC
get set up, anything.
  • 57:28 - 57:34
    You don’t need to be an expert
    on FPGA stuff to help out.
  • 57:34 - 57:38
And we’re also working on
    a little project to run MicroPython
  • 57:38 - 57:43
    on FPGAs. So if you’re really into Python
    and you like MicroPython
  • 57:43 - 57:48
    I would love to help you help us do that.
  • 57:48 - 57:53
    It’s kind of working. We just need more
    powerful (?) support. So.
  • 57:53 - 57:56
    Herald: We have two more questions from
    microphone number 1.
  • 57:56 - 57:59
    Question: So, is there some sort of
    dedicated processor on that board,
  • 57:59 - 58:02
or do you use like a MicroBlaze
    in the FPGA?
  • 58:02 - 58:07
    Tim: We use an open source soft core.
    One of three.
  • 58:07 - 58:12
    We can change which soft core
    we’re using with a command line flag.
  • 58:12 - 58:16
    We’re using either the LatticeMico32
  • 58:16 - 58:20
    which is produced
    by Lattice Semiconductor.
  • 58:20 - 58:25
    We can use the OpenRISC-1000
  • 58:25 - 58:29
    or we can use a RISC-V processor.
  • 58:29 - 58:35
    We generally default to LM32
because it offers the best performance-
  • 58:35 - 58:40
per-FPGA-resource trade-off.
  • 58:40 - 58:46
    But if you like RISC-V
    or OpenRISC-1000 better
  • 58:46 - 58:52
    for some reason, say, you want
    to run Linux on our soft core,
  • 58:52 - 58:58
then you can do that, with a one-line
    command line change, yeah!
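For a sense of what that one-line choice looks like: the HDMI2USB gateware is built with the MiSoC/LiteX ecosystem, where the soft core is just a parameter on the SoC. A minimal sketch, assuming a LiteX-style SoCCore; the argument names and exact cpu_type spellings vary between LiteX versions, so treat the details as assumptions:

```python
# Sketch of swapping soft cores in a LiteX/MiSoC-style SoC: the CPU is
# one constructor parameter, everything else stays the same. Argument
# names and cpu_type spellings vary by LiteX version (assumption).
from litex.soc.integration.soc_core import SoCCore

def build_soc(platform, cpu="lm32"):
    # cpu might be "lm32", "mor1kx" (an OpenRISC-1000 implementation),
    # or one of the RISC-V cores, depending on your LiteX version.
    return SoCCore(platform, clk_freq=int(100e6), cpu_type=cpu)
```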
  • 58:58 - 59:04
    We’re looking at adding
J-Core support early next year.
  • 59:04 - 59:08
    J-Core is quite big, though,
    compared to LM32. So,
  • 59:08 - 59:12
    it probably won’t fit on some
    of the very small devices.
  • 59:12 - 59:14
    Question: So it’s a Lattice FPGA?
  • 59:14 - 59:21
Tim: It’s a Spartan-6 FPGA. And our new
boards will probably be Artix-7,
  • 59:21 - 59:24
but we’re still in the process
of making them exist.
  • 59:24 - 59:26
    Question: Thanks.
    Tim: I’ve also been working with
  • 59:26 - 59:30
    bunnie’s NeTV2, porting
    our firmware to that,
  • 59:30 - 59:33
    which has been really awesome.
    He’s doing some cool work there,
  • 59:33 - 59:40
    and he’s kind of inspired this whole
    development by showing that,
  • 59:40 - 59:43
    yes, you could do this,
    and you shouldn’t be scared of it.
  • 59:43 - 59:46
    Herald: Good, one more question
    from microphone number 1.
  • 59:46 - 59:54
    Question: Yes. Do you have any plans for
    incorporating HD-SDI into your platform?
  • 59:54 - 59:59
    Tim: Yes and no!
    We have plans and ideas
  • 59:59 - 60:02
    that we could do it
  • 60:02 - 60:07
    but HD-SDI
  • 60:07 - 60:13
    and all of the SDI protocols are
    much harder for the consumer,
  • 60:13 - 60:17
generally, to access, and we want
to drive the cost of this down
  • 60:17 - 60:22
as low as it can go. And…
  • 60:22 - 60:26
HDMI is a consumer electronics thing.
    You get it on everything.
  • 60:26 - 60:30
You get it on your,
like, five-buck Raspberry Pi.
  • 60:30 - 60:35
    HDMI is probably
    a really good solution for this.
  • 60:35 - 60:38
    We haven’t developed any SDI cores
    or anything like that,
  • 60:38 - 60:43
so I can’t tell you
    that we’re doing anything there
  • 60:43 - 60:49
    but if somebody’s interested, again,
I like to remove roadblocks and
  • 60:49 - 60:53
    we would love to have people work on that.
  • 60:53 - 60:56
    Herald: We have one more question from
    the internet and we have two minutes left.
  • 60:56 - 61:02
    Signal Angel: OK, thank you. The question
    is not related to HDMI but to FPGAs.
  • 61:02 - 61:06
FPGAs are programmed in a high-level
language like Verilog or…
  • 61:06 - 61:11
    after simulation you compile. So every
vendor has created its own compiler
  • 61:11 - 61:16
    for its own hardware. Are you aware of
    a move to open source compilers
  • 61:16 - 61:21
    or to independent hardware? And do you see
    a benefit in open source FPGA compilers?
  • 61:21 - 61:27
    Tim: Yes! If anybody knows
  • 61:27 - 61:31
about FPGAs, you know
    they use proprietary compilers.
  • 61:31 - 61:36
    And these proprietary compilers
    are terrible.
  • 61:36 - 61:40
    I’m a software engineer.
    If I find a bug in gcc
  • 61:40 - 61:44
    I can fix the bug. I’ve got those skills,
    and I can move forward or at least
  • 61:44 - 61:47
    figure out why the hell the bug occurred.
  • 61:47 - 61:52
    That is not the case with FPGA compilers.
    The FPGA compiler we use
  • 61:52 - 61:57
    is non-deterministic. You can give it
    the same source code and it produces
  • 61:57 - 62:02
    different output. I’d love somebody
    to reverse-engineer why that occurs
  • 62:02 - 62:07
because I’ve removed all the sources
of randomness from it
  • 62:07 - 62:12
    and it still manages to do it!
    I’m really impressed. So,
  • 62:12 - 62:17
    Clifford has done an open source
FPGA toolchain
  • 62:17 - 62:22
    for the Lattice iCEstick things.
  • 62:22 - 62:29
    He said he’s gonna work
on the Artix-7 FPGAs.
  • 62:29 - 62:34
    Please donate to him and help him.
    I would… like…
  • 62:34 - 62:38
if that exists I’ll owe people…
like, a bazillion beers, because
  • 62:38 - 62:43
    the sooner I can get off proprietary
toolchains, the happier I will be,
  • 62:43 - 62:49
    and it will make my hobby so much nicer.
    So, please help him!
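For reference, that open toolchain is Project IceStorm: Yosys synthesizes the Verilog, arachne-pnr places and routes it, and icepack/iceprog turn the result into a bitstream on the iCEstick. A sketch of the flow, wrapped in Python, with placeholder file names for your own design:

```python
# Sketch of the IceStorm iCE40 flow ("top.v" and friends are
# placeholder file names). The -d 1k device flag matches the
# iCE40HX1K part on the iCEstick.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)  # abort on the first failing step

run(["yosys", "-p", "synth_ice40 -top top -blif top.blif", "top.v"])            # synthesis
run(["arachne-pnr", "-d", "1k", "-p", "top.pcf", "-o", "top.asc", "top.blif"])  # place & route
run(["icepack", "top.asc", "top.bin"])  # pack to bitstream
run(["iceprog", "top.bin"])             # flash the iCEstick
```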
  • 62:49 - 62:51
    Herald: And do give Tim
    a big round of applause!
  • 62:51 - 62:55
    applause
  • 62:55 - 62:58
    postroll music
  • 62:58 - 63:19
    subtitles created by c3subtitles.de
    in the year 2017. Join, and help us!