-
33C3 preroll music
-
Herald-Angel: Tim ‘mithro’ Ansell
has come all the way from Australia
-
to talk to us about ‘Dissecting HDMI’
and developing an open FPGA-based
-
capture hardware for sharing
talks outside of the room.
-
And he will be explaining how
to dissect it. And I’m looking forward
-
to hearing the talk in a second.
And so please give Tim
-
a warm round of applause
again. Thank you!
-
applause
-
Tim Ansell: Okay, hi, I’m Tim, and
in theory, if my slides change
-
you would see that.
-
And I kind of have too many projects.
And I’m gonna be discussing one of them.
-
This is another project that
I gave a lightning talk on earlier.
-
If you didn’t see it, it’s an ARM
microcontroller that goes in your USB port.
-
People wanted to know
when to hack on it.
-
Tomorrow at 2 PM, apparently.
-
So, the first thing that I want to say is
I’m a software developer.
-
I’m not a hardware designer,
I’m not an FPGA developer,
-
I’m not a professional in any of that.
-
I develop software full time.
So this is my hobby.
-
As well, this information comes from
a couple of projects that I started
-
but a lot of other people did the majority
of the work and I’m just telling you
-
about it because they are too shy
to come up and talk about it themselves.
-
So a big thank you to all these people
who have helped me in various ways
-
regarding this. And these slides –
-
any of the blue things are links, so
if you’re playing along at home
-
you can get to them by that URL
and click on these things.
-
And there are probably other people I’ve
forgotten, who are not on this list.
-
I’m very sorry.
-
So the title of this talk could be called
“Software guy tries hardware and complains”.
-
I’ve had a really hard time figuring out
what to call this talk.
-
And you’ll see some other attempts
at naming this talk better.
-
So a bit of history.
How did I end up doing HDMI stuff?
-
So TimVideos is a group of projects
which are trying to make it easy
-
to record and live-stream user groups
and conferences like this event.
-
However, we want to do it without
needing the awesome team
-
that is doing the recording here.
-
These guys are really, really organized
and professional.
-
We want to do it where people have
no experience at all with AV,
-
and just can make it happen.
-
And so this is how you record
a conference or user group.
-
I’m gonna be talking about
these two things here,
-
the HDMI2USB devices that we created.
-
They are used in our setup both for
camera capture and for capture of slides.
-
And so HDMI2USB is FOSS hardware
for doing HDMI capture
-
and actually has a bit of history
with the CCC
-
because it was inspired by
a speaker who spoke here.
-
Bunnie spoke on his NeTV board
-
which was an FPGA Man-in-the-Middle attack
-
on HDCP-secured links. His talk
is really awesome.
-
That talk is way more technical
than mine and gives you
-
some really awesome details about the
cool things he did to make that work.
-
Mine is much more basic. You don’t
need much experience with HDMI
-
to follow my talk.
Our device works like his does,
-
except his was deliberately designed
to not allow capture,
-
our design allows capture.
It effectively man-in-the-middles
-
the connection between
the presenter’s laptop
-
and the projector, and provides
a high-quality capture out the USB2 port.
-
I use an FPGA to do that.
This is because
-
using an FPGA turns hardware problems into
software problems, and as I said
-
I’m a software developer, I prefer
software problems to hardware problems.
-
And the way it kind of works is
it appears as a UVC webcam so that
-
you can use it with Skype or Hangouts
or any of those things without needing
-
any drivers on sensible operating
systems like MacOS and Linux.
-
On Windows you need a driver that
tells it to use the internal driver.
-
It’s kind of weird.
It also appears as a serial port,
-
because we have the ability to switch
which input goes to which output.
-
It’s kind of like a matrix.
-
And so this is the open source
hardware we designed.
-
It’s in KiCAD, you can find it on Github.
-
I’m quite proud of it.
It’s quite a good little kit.
-
We don’t use all the features of it yet,
but it’s pretty awesome.
-
And it’s in use!
-
We used this technology to capture
at a bunch of conferences:
-
PyCon in Australia,
linux.conf.au in Australia
-
– as I said, I’m Australian.
-
DebConf, though, is not Australian.
-
They used it in – sorry –
in South Africa, I think.
-
And there are a whole bunch of
other people around the world
-
who are using this,
which is pretty awesome.
-
The main reason I wanted it to be
open source was so that
-
other people could use them
and learn from them and fix problems,
-
because there are a lot of problems
we’ve run into.
-
And the other thing is
this is all full of Python.
-
We do use Python
-
to create the firmware for the FPGA,
and all these other areas.
-
If you wanna find out more about that
go to my talk at PyCon AU
-
which was recorded with the very device
I’m talking about.
-
microphone noise
Oops, sorry!
-
But as I said, this is going
to include a lot of problems.
-
The first one is:
People still use VGA!
-
This kind of makes me sad.
-
Because VGA is not HDMI.
It was invented in 1987
-
and it’s an analog signal.
-
Well, HDMI shares some history
with VGA.
-
You can’t use the same techniques
for capturing HDMI that you can for VGA.
-
So why do you still use it?
It’s old and bad!
-
We developed a VGA expansion board
to effectively allow us to capture
-
VGA using the same thing.
-
By ‘developed’ I mean
we have designed, and some exist
-
but nobody’s actually finished the
firmware to make them work yet.
-
So, I’d love help there.
-
There is also another problem:
-
I want to do this
all open source, as I said.
-
The HDMI ecosystem has
commercial cores you can buy
-
and they work reasonably well,
but you have to buy them
-
and you don’t get the source code to them
or if you do get the source code to them
-
you can’t share them with other people.
-
As well, I wanted it to be open source
because we wanted to solve
-
all those problems that people
have when plugging in their laptop
-
and it not working.
-
And the commercial cores aren’t designed
to allow us to give the ability
-
to do that –
solve those problems permanently.
-
So we created a new implementation!
-
As anybody who has ever done a
reimplementation or a new implementation
-
knows, it means that you get new bugs,
which I’ll describe quite a bit.
-
So this talk could be called
-
‘Debugging HDMI’ rather than
‘Dissecting HDMI’ because it includes
-
a lot of information about
how things went wrong.
-
Ok, that’s kind of the introduction of why
we are here and why I’m talking about this.
-
So how does HDMI work?
-
Well, HDMI is actually reasonably old now.
-
It was created in 2002.
-
It’s based on the DVI specification.
DVI was created in 1999.
-
So DVI is 17 years old.
-
And DVI was designed to replace VGA
and shares a lot of similar history.
-
HDMI is backwards compatible with DVI
electrically and protocol wise,
-
but uses a different connector.
-
This is a HDMI connector.
You’ve probably seen them all before.
-
If you look closely you see that
there are 19 pins on the HDMI connector.
-
That’s Pin 1.
-
So what do all these pins do?
-
There are five pins
which are used for Ground.
-
There is one pin which is used for Power,
it gives you 5 volts at 50 milliamps.
-
This isn’t much.
You can’t do much with 50 milliamps
-
except maybe power some type of adapter
or converter, or a whole microcontroller.
-
Some Chinese devices try
to draw like an amp from this.
-
That’s not very good. So that’s another
thing you should watch out for.
-
There are three high speed data pairs
which transmit the actual video data.
-
And they share a clock pair.
So that’s these pins here.
-
And then there are five pins
which are used for low speed data.
-
So that’s all the pins
on the HDMI connector.
-
You might have noticed that there was
a whole bunch of different things
-
I said there. And you need to actually
understand a whole bunch
-
of different protocols
to understand how HDMI works.
-
There is bunch of low speed ones
and there is a bunch of high speed ones.
-
I’m not gonna talk
about all of those protocols
-
because there are just too many
to fit into an hour talk.
-
The low speed protocols
I’m not gonna talk about are
-
CEC and Audio Return;
and I’m not gonna talk about any of the
-
Auxiliary Data protocol
that is high speed, or HDCP.
-
If you want HDCP go
and look at bunnie’s talk.
-
It’s much better than mine.
But… or ethernet!
-
What I will be talking about is
the EDID and DDC protocols,
-
The 8b/10b encoding of the pixel data
-
and the 2b/10b encoding
of the control data.
-
Interestingly enough, this is actually DVI.
-
I’m not telling you about HDMI, I’m
really describing to you how DVI works.
-
Again, many titles.
-
Starting with the low speed protocol:
-
EDID or DDC.
-
I’m gonna use these two terms
interchangeably,
-
they’ve been so confused now
that they are interchangeable,
-
in my opinion.
-
This is something they inherited from VGA.
-
It was invented and added
to VGA in August 1994.
-
It was for plug and play of monitors
so that you could plug in your monitor
-
and your graphics card would just work
rather than requiring you to tell
-
your graphics card exactly what resolution
and stuff your monitor worked at.
-
It uses I2C [I squared C]
and a small EEPROM.
-
These are the pins that it uses.
-
15 is the Clock pin and
16 is the Data pin,
-
and then it uses the Ground, and the
5 Volts is used to power that EEPROM.
-
And in some ways it also uses pin 19,
because pin 19 (Hot Plug Detect) is how you detect
-
that there is something there
to read from.
-
It uses I2C.
-
I2C is a low speed protocol that runs
at either 100 kHz or 400 kHz.
-
Technically EDID is not I2C,
but it is.
-
It only supports the 100 kHz version,
though, in theory,
-
everything on this planet
can be read at 400 kHz.
-
It is also very well explained elsewhere,
so I’m not going to explain in detail
-
what I2C is or does,
or how to implement it.
-
The EEPROM is a 24 series.
It’s found at I2C address 0x50.
-
It uses 8-bit addressing, which gives
you 256 bytes of data.
-
Again, this EEPROM and how to talk to it
is very well described on the internet.
-
So I’m not gonna describe it here.
If you’ve used EEPROMs
-
over I2C it’s likely
you’ve used a 24 series EEPROM.
-
Probably bigger ones,
256 bytes is pretty small.
-
So like a 16-bit one,
-
but EDID only supports
the 8-bit ones.
-
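As a rough sketch of what reading that EEPROM looks like in Python (the project’s language of choice), using the smbus2 library on Linux; the I2C bus number is an assumption that varies per machine:

```python
from smbus2 import SMBus, i2c_msg

EDID_ADDR = 0x50  # the standard DDC/EDID EEPROM address

# Write the starting offset, then read one 128-byte EDID block back,
# exactly like talking to any other 24-series EEPROM.
with SMBus(1) as bus:                          # bus 1 is an assumption
    set_offset = i2c_msg.write(EDID_ADDR, [0x00])
    read_block = i2c_msg.read(EDID_ADDR, 128)
    bus.i2c_rdwr(set_offset, read_block)       # repeated-start transaction
    edid = bytes(read_block)

print(edid.hex())
```
-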
The kind of interesting part of EDID
is the data structure:
-
it’s a custom binary format that describes
-
what the contents of the EEPROM is.
-
Again, Wikipedia has a really
good description of this.
-
So I’m not gonna go
into much detail.
-
But the important thing is that it
describes resolution, frequency
-
and format for talking to the monitor.
-
This is really important because
if you try and send
-
the wrong resolution, frequency or format
the monitor is not gonna understand it.
-
This is kind of what EDID is used for.
-
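Two details of that binary format are easy to pin down: every EDID block starts with a fixed 8-byte header, and all 128 bytes must sum to zero modulo 256. A minimal sanity check, as a sketch:

```python
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def edid_looks_valid(block: bytes) -> bool:
    """Check the fixed header and the checksum rule: the last byte of
    a block is chosen so that all 128 bytes sum to 0 mod 256."""
    return (len(block) == 128
            and block[:8] == EDID_HEADER
            and sum(block) % 256 == 0)
```
-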
sipping, sound of water bottle
-
So this is where things
start getting a bit hairy.
-
Presenters come up to the front and
the first question you see anybody ask is:
-
What resolution do I use?
-
And they get a panel like this which
has a bazillion resolutions to select from.
-
And the thing is, despite your monitor
saying that it supports
-
many formats, they lie.
-
It turns out that projectors lie
a lot more than normal displays.
-
I don’t know why they are special.
-
So this is what a supported format
looks like.
-
It’s really great.
-
As well, I care about
capturing the data.
-
And so I want things
in a format that is
-
easy for me to capture.
-
I also don’t want to be scaling
people’s images and text
-
because scaling looks really bad.
So if someone selects
-
like a really low resolution and
we scale it up it looks really horrible.
-
It makes text unreadable; and
presenters are renowned,
-
especially at technical conferences,
for using tiny, tiny fonts.
-
And so we need to use as much
resolution as we can.
-
How we solve this is we emulate
our own EEPROM in the FPGA
-
and ignore what the projector
tells us it can do.
-
We tell the presenter that this is
what we support.
-
You might notice that this kind of
solves the problem
-
of what resolution to use.
-
Offering a single option makes it
very hard to choose the wrong one.
-
That’s good! We solved the problem!
-
No, we haven’t solved the problem.
-
We were recording PyCon AU
and we found that
-
some Mac laptops were
refusing to work.
-
To understand the cause of this
you need to understand
-
a little bit about how the world works.
-
There are two major frequencies
in the world: 50 Hz and 60 Hz.
-
50 Hz is mainly used
in the “Rest of the World”
-
and 60 Hz is used in America
and Japan and a few other places
-
but that’s kind of a very rough thing.
-
Take a laptop sold in Australia.
Australia is 50 Hz.
-
It’s part of the “Rest of the World”.
You’d think that the laptop
-
could do 50 Hz. Plus everything
is global these days, right?
is global these days, right?
-
I can plug in my power pack
for my laptop in the US or Australia,
-
like, it should work everywhere, right?
-
No. Sad!
-
We solved it by claiming
that we were American
-
and supporting 60 frames per second
rather than 50 frames per second.
-
So I guess a display
with an American accent.
-
We deployed this hotfix
on the Friday evening.
-
And on Saturday all the problems
that we were having on Friday
-
went away. So this is kind of
the power of an open source solution
-
and having complete control
over your hardware.
-
Nowadays we actually offer both 60 and 50
-
because for display capture
if you’re displaying stuff
-
at 50 frames per second you’re
probably speaking a lot faster than I am.
-
It’s really weird: these
128 bytes are really hard,
-
and the number one cause
of a person’s laptop being unable
-
to talk to the projector.
-
It gets a trophy!
-
To try and figure out why that is
we created EDID.tv.
-
It’s supposed to be
a repository of EDID data.
-
There is a Summer of Code project,
Python/Django/Bootstrap
-
and an EDID grabber tool that
you can run on your laptop.
-
I’d love help making this work better.
-
It hasn’t had much love since
the Summer of Code student made that work.
-
But it would be really nice to have an
open database of everybody’s EDID data
-
out there. There are a bunch
of closed ones. I can pay to buy one,
-
but I’d really love to have an open one.
-
As well maybe we don’t need
the whole capture solution,
-
maybe you can just override the EDID.
-
The C3VOC here actually developed
a version that overrides EDID for VGA.
-
I have a design which works for HDMI.
-
It just uses a low cost microprocessor
to pretend to be an EEPROM.
-
As well, DisplayPort is not HDMI.
Don’t get these two confused,
-
they are very, very different protocols.
-
It has an Auxiliary Channel,
which is like EDID and CEC.
-
I have boards to decode them
here at CCC.
-
So if you’re interested in that
come and talk to me
-
because we would really like to do
similar things for DisplayPort.
-
That is the slow speed data.
-
Sip from bottle
-
What about the high speed data?
-
Each pixel on your screen is
-
basically three colors
in the DVI standard: Red, Green, Blue.
-
And each one is a byte in size.
-
Each of the colors is mapped to
a channel on the HDMI connector.
-
You can kind of see the Red and
the Green and the Blue channels.
-
Each channel is a differential pair.
-
You get a Plus and a Negative
and a Shield.
-
And they use twisted pair to try
and reduce the noise reception of these,
-
because these are quite high speed.
-
And they have a dedicated Shield to
try and – again – reduce the noise
-
that is captured.
-
This is kind of where it gets to
the ‘differential signaling’ part
-
of the ‘TMDS’ that is
the kind of code name
-
for the internal protocol that is used
on the high speed data.
-
They also…
all these channels share a Clock.
-
That clock is called the Pixel Clock.
-
But each of these channels
is a serial channel.
-
Every clock cycle there are 10 bits of data
transmitted on each of these channels.
-
There is a shared clock and
each of the channels is running
-
at effectively
ten times that shared clock.
-
This is kind of what
the whole system looks like.
-
You have your Red, Green, Blue channels.
-
You take your 8 bits of input data
on each channel
-
and you convert it to the 10 bits
-
that we’re going to transmit,
and it goes across the cable
-
and then we decode it on the other side.
-
The question is: what does
the 8 bit to 10 bit encoding
-
look like, and how do you understand it?
-
It is described by this diagram here.
It’s a bit small so I’ll bring it up.
-
This is what it looks like.
Yes… sure…
-
…what? This diagram – like –
-
I’ve spent hours looking at this,
and it is an extremely hard diagram
-
to decode.
It’s very, very hard to understand.
-
And it turns out the encoding
protocol is actually quite easy!
-
It’s three easy steps – approximately.
-
So I’m going to show you all how
to write an encoder or a decoder.
-
That diagram is just for the encoder.
-
They have a similar diagram that
is not the inverse of this for decoding.
-
Again, almost impossible to read.
-
The three steps: First we’re
going to do ‘Control’ or ‘Pixel’,
-
choose which one to do. Then we’re
going to either encode Control data
-
or encode Pixel data.
-
A couple of important points
to go through first:
-
The Input data
– no matter how wide it is –
-
is converted to 10 bit symbols.
-
Data goes to symbols.
When we’re talking about them
-
being transmitted we talk about them
in symbols, when it’s decoded into pixels
-
we talk about them in data.
-
As well, things need
to be kept DC-balanced.
-
I’ve rushed ahead.
-
The question is: “Why 10 bits?”
Our pixels were 8 bits.
-
I will explain why
in the Pixel Data section.
-
But it’s important that all our symbols
are the same size.
-
We’re always transmitting 10 bits
every clock cycle.
-
Keeping DC-balanced:
-
long runs of 1s and 0s are bad.
-
There are lots of reasons for this.
-
HDMI isn’t AC-coupled,
but I tend to think of it
-
as if it were.
-
It’s not to recover Clock.
-
We have a clock pair that is used
to give our Clock signal.
-
There are lots of lies on the internet
that say that the reason
-
we keep DC balance
is because of the Clock.
-
But no, that’s not the case.
-
So what does DC balance mean?
-
A symbol is considered DC-biased
-
if it has more 1s than 0s,
-
or more 0s than 1s.
-
This is kind of what it’s like:
this symbol here
-
has lots of 1s and if you add up
all the 1s you can see it’s got
-
quite a positive bias.
If it was inverse and had lots of 0s
-
it would have a negative DC bias.
-
That DC bias over time
causes us problems.
-
Those are the two important things we have
to keep in mind when looking at the rest.
-
sound of bottle sip
-
The first thing we need to figure out is
are we transmitting Control data
-
or Pixel data.
-
Turns out that what is happening
in your display is,
-
we are transmitting something
that’s actually bigger
-
than what you
see on your screen.
-
This is not to scale. The Control data
periods are much, much smaller.
-
The Control data is in orange
and the Pixel data is in purple-pink.
-
So why does this exist?
It exists because of old CRT monitors.
-
And for those in the audience
who were born after CRT monitors,
-
this is what they look like.
-
The way they work is,
they have an electron beam
-
that scans across,
lighting up the phosphor.
-
This electron beam can’t just
get back to the other side of the screen
-
straight away, or get back to the top of
the screen. And so these periods
-
where we are transmitting Control data
were to allow the electron beam
-
to get back to the location
where it needed to start
-
transmitting the next set of data.
-
That’s why it exists.
Why do we care?
-
Because the encoding schemes
for Control and Pixel data
-
are actually quite different.
-
This is the main difference.
I’m going to come back to this slide
-
a bit later. But again, the
important thing to see here is
-
that despite the encoding scheme
being quite different
-
the output is 10 bits in size.
-
That first step – choosing whether
it’s Pixel or Control data –
-
is described by this bit of the diagram.
You might notice that’s
-
not the first thing in the diagram.
-
How do you convert Control data
to Control symbols?
-
First we need to know what
Control data is. There are two bits,
-
there are the HSync and the VSync signals.
-
They basically provide
the horizontal and vertical synchronization.
-
They are kind of left over from VGA.
We don’t actually need them
-
in HDMI or DVI to know
where the edges are
-
because we can tell the difference
between Control and Pixel data.
-
But they kind of still exist
because of backwards compatibility.
-
This means that we have two bits of data
that we need to convert to 10 bits of data.
-
So, a 2b/10b scheme.
-
How they do it is they just hand-picked
four symbols that were going to be
-
these Control data symbols.
-
These are the four symbols. There are
some interesting properties to them.
-
They are chosen to be DC-balanced.
They roughly have the same number
-
of 0s and 1s. So we don’t have to worry about
the DC bias with these symbols very much.
-
They are also chosen to have
seven or more transitions from 0 to 1
-
in them. This number of transitions
-
is used to understand
the phase relationship
-
of the different channels.
So if you remember this diagram,
-
we have a cable going between
the transmitter and the receiver.
-
These are, again, very high speed signals.
-
And even if the transmitter was
transmitting everything at the same time,
-
the cable isn’t ideal and might
delay some of the symbols.
-
The bits on one channel
[might take] longer than others.
-
By having lots of these transitions
we can actually find
-
the phase relationship between
each of the channels and then
-
recover the data. And so
that’s why these Control symbols
-
have a large number
of transitions in them.
-
More on that later when we get to the
implementation. And I’m running out of time.
-
This part of the diagram is the
Control data encoding.
-
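For reference, these are the four hand-picked tokens as they appear in the DVI spec and the usual reference implementations, keyed by the (HSync, VSync) bits they carry; the exact bit ordering on the wire is a detail to check against your own serializer:

```python
# 10-bit TMDS control tokens, keyed by (hsync, vsync).
CONTROL_TOKENS = {
    (0, 0): 0b1101010100,
    (1, 0): 0b0010101011,
    (0, 1): 0b0101010100,
    (1, 1): 0b1010101011,
}

def transitions(sym: int) -> int:
    """Count 0<->1 transitions between adjacent bits of a 10-bit symbol."""
    return bin((sym ^ (sym >> 1)) & 0x1FF).count("1")

# The properties described above: each token has seven or more
# transitions, and is roughly DC-balanced.
assert all(transitions(tok) >= 7 for tok in CONTROL_TOKENS.values())
```
-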
sip from bottle
-
What about Pixel data
and the Pixel symbols?
-
Again, in DVI each channel
of the Pixel is 8 bits.
-
And the encoding scheme is described
by the rest of the diagram.
-
But again, it’s actually
really, really simple.
-
This encoding scheme is called 8b/10b,
-
because it takes 8 bits
and converts them to 10 bits.
-
However, there is a huge danger
here because IBM also invented
-
the 8b/10b scheme
that is used in everything.
-
This is used in DisplayPort, it’s used
in PCI Express, it’s used in SATA,
-
it’s used in pretty much everything
on the planet.
-
This is not the encoding TMDS uses.
-
You can lose a lot of time
trying to map this diagram
-
to the IBM coding scheme,
and going “these are not the same”.
-
That is because they’re not the same.
This is a totally different coding scheme.
-
Encoding Pixel data is a two-step process.
I did say it was three-ish steps
-
to do this.
The first step is we want to reduce
-
the transitions in the data.
-
How do we do this? –
Sorry, why do we do this?
-
Because this, again, is
a high speed channel.
-
We want to reduce the cross-talk
between the lanes.
-
They are actually quite close
to each other.
-
So by reducing the number
of transitions we can reduce
-
the probability that the signal propagates
-
from one channel to the next.
And how we do it?
-
We’re gonna choose one
of two encoding schemes.
-
An XOR encoding scheme
or an XNOR encoding scheme.
-
How do we do the XOR encoding scheme?
It’s actually pretty simple.
-
We set the first Encoded Bit
to the same as the first Data Bit,
-
and then each next Encoded Bit
is the previous Encoded Bit
-
XORed with the next Data Bit.
-
And then we just repeat until
we’ve done the 8 bits.
-
So this is how we do the XOR encoding.
-
The XNOR encoding is the same process,
except instead of using XOR
-
it uses XNOR.
-
How do we choose
which one of these to use?
-
If the Input Data byte
has fewer than four 1s
-
we use the XOR. If it has more
than four 1s we use the XNOR.
-
And then there’s a tie-break
if you have exactly four.
-
The important thing here is that this
method is determined by the Data byte only.
-
There is no hidden state here
or continuous change.
-
Every pixel has a one-to-one
mapping to an encoding.
-
Then we append a bit
on the end that indicates
-
whether we chose XOR
or XNOR encoding for that data.
-
And so that converts
our 8-bit Input Pixels
-
to 9 bits of encoded data: effectively
our 8-bit encoded sequence
-
and then one bit to indicate whether
we chose XOR or XNOR encoding
-
for that Data byte. So that’s it there.
-
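As a sketch of that first stage in Python, following the steps just described (the tie-break for exactly four 1s comes from the DVI spec, which also looks at the lowest data bit):

```python
def transition_minimize(d: int) -> list:
    """Stage 1 of TMDS pixel encoding: XOR- or XNOR-chain the byte to
    reduce transitions, then append one bit recording the choice
    (1 = XOR, 0 = XNOR). Returns 9 bits, q[0] first."""
    bits = [(d >> i) & 1 for i in range(8)]    # d[0] .. d[7]
    ones = sum(bits)
    use_xnor = ones > 4 or (ones == 4 and bits[0] == 0)

    q = [bits[0]]                              # first bit passes straight through
    for i in range(1, 8):
        x = q[i - 1] ^ bits[i]
        q.append(1 - x if use_xnor else x)     # XNOR is just inverted XOR
    q.append(0 if use_xnor else 1)             # q[8]: which chain we used
    return q
```
-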
This encoding is actually very good
at reducing transitions.
-
On average, we had roughly
eight transitions previously,
-
now we have roughly three-ish,
so it’s pretty cool.
-
I have no idea how they figured this out.
-
I’m assuming some very smart
mathematicians were involved
-
because discovering this is beyond me.
-
And that describes the top part
of this process.
-
sounds of scratching nose and beard
-
This is where, in the TMDS, the
Transition Minimization comes from,
-
that step there in the encoding process.
-
But there is still one more step.
-
We need to keep the channel
DC-balanced, as I explained earlier.
-
How can we do that? Because
not all pixels are guaranteed to be
-
at zero DC bias
like the Control symbols are.
-
We do it by keeping a running count
of the DC bias we have,
-
and then, if we have a positive DC bias
-
and the symbol is also
positively biased, we invert it.
-
Or, if we have a negative DC bias
and the symbol has a negative DC bias,
-
we invert it.
And the reason we do this is
-
because when we invert a symbol we
convert all the 1s to 0s which means
-
a negative DC bias
becomes a positive DC bias.
-
As I said, we chose – because we are already
negative and the thing was negative –
-
we convert it to plus. It means we’re
going to drive the running DC bias value
-
back towards zero.
We might overshoot, but the next stage
-
we’ll keep trying to oscillate up and
down, and on average over time
-
we keep a DC bias of zero.
-
And as I said, then, to indicate
whether we inverted the symbol
-
or passed it straight through,
-
we add another bit on the end.
So that’s how we get our 10 bit
-
encoding scheme.
We have the 8 bits of encoded data,
-
then one bit indicating whether or not
it used XOR/XNOR encoding,
-
and then one bit to indicate whether
or not we inverted the symbol.
-
That describes this bottom part
of the chart.
-
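A sketch of that second stage to match; it follows the simplified description above and glosses over some of the DVI spec’s extra bias bookkeeping around the two flag bits, so treat it as illustrative:

```python
def dc_balance(q_m: list, bias: int):
    """Stage 2 of TMDS pixel encoding: invert the data bits when that
    drives the running DC bias back towards zero, and append one bit
    recording the inversion. Returns (10 bits q[0] first, new bias)."""
    data = q_m[:8]
    disparity = 2 * sum(data) - 8          # this symbol's own bias

    if bias == 0 or disparity == 0:
        invert = q_m[8] == 0               # nothing to correct: spec tie-break
    else:
        # Same sign as the running bias: inverting pulls back towards zero.
        invert = (bias > 0) == (disparity > 0)

    if invert:
        data = [1 - b for b in data]
        disparity = -disparity
    return data + [q_m[8], 1 if invert else 0], bias + disparity
```
-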
Now you can see partly
why this chart is kind of confusing.
-
It’s in no way what I think
of as a logical diagram.
-
This might be how you implement it
in hardware if you really understand
-
the protocol, but not a very good diagram
for explaining what’s going on. And…
-
sip from bottle
-
As you see it’s actually pretty simple.
-
In summary this is
the interesting information
-
about the two different encoding schemes.
-
Because we minimized
the transitions in the Pixel data
-
we can actually tell
Control and Pixel data apart
-
by looking at how many transitions
are in the symbol.
-
If it has six or more transitions
it must be a Control symbol.
-
If it has four or fewer
it must be a Pixel symbol.
-
You now know
how to encode TMDS data
-
and how to decode TMDS data
-
because if you want to decode
you just do the process backwards.
-
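Putting the pieces together, here is a sketch of that backwards process, reusing the CONTROL_TOKENS table and transitions() helper from the control-symbol sketch earlier (symbols are packed ints, with q[0] in the lowest bit):

```python
CONTROL_FROM_TOKEN = {tok: hv for hv, tok in CONTROL_TOKENS.items()}

def tmds_decode(sym: int):
    """Decode one 10-bit symbol. Returns ('control', (hsync, vsync)),
    ('pixel', byte), or ('error', None) for an unknown symbol."""
    if transitions(sym) >= 6:                  # transition-heavy: control
        hv = CONTROL_FROM_TOKEN.get(sym)
        return ("control", hv) if hv is not None else ("error", None)

    data = [(sym >> i) & 1 for i in range(8)]
    if (sym >> 9) & 1:                         # bit 9: symbol was inverted
        data = [1 - b for b in data]
    used_xor = (sym >> 8) & 1                  # bit 8: XOR vs XNOR chain

    out = [data[0]]
    for i in range(1, 8):
        x = data[i] ^ data[i - 1]
        out.append(x if used_xor else 1 - x)   # undo the chain
    return ("pixel", sum(b << i for i, b in enumerate(out)))
```
-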
Congratulations!
How do you actually implement this?
-
You can just write the XOR logic
-
and a little counter
that keeps track of the DC bias
-
and all that type of thing
in FPGA.
-
I’m not going to describe that
because I don’t have much time.
-
But if you followed the process
that I have given you
-
it should be pretty easy.
-
And this is what we use currently.
-
You could actually use a lookup table.
What we are doing is
-
converting 8 bits of data
to 10 bits of data.
-
That is a lookup table process,
pretty easy.
-
FPGAs are really good at
doing ‘lookup table’-type processes,
-
and it also allows you then
to extend this system
-
to those other protocols
like the 4b/10b that is used
-
for the Auxiliary data.
-
So we are looking at that in the future.
It uses a few more resources
-
but it’s a lot more powerful.
-
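As a sketch of the lookup-table idea: run the encoder from the earlier sketches over every byte in both inversion states, once, and decoding becomes a single table read, which is exactly the shape of work FPGA block RAM is good at:

```python
def pack(bits: list) -> int:
    return sum(b << i for i, b in enumerate(bits))

# 10-bit symbol -> pixel byte, built from transition_minimize() above.
PIXEL_LUT = {}
for d in range(256):
    q_m = transition_minimize(d)
    for inverted in (0, 1):
        data = [1 - b for b in q_m[:8]] if inverted else q_m[:8]
        PIXEL_LUT[pack(data + [q_m[8], inverted])] = d

# Anything that is neither in PIXEL_LUT nor a control token is an error.
```
-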
This is kind of what your encoder
will look like, and your decoder.
-
It’s quite simple,
it takes in your 10 bits of data
-
and outputs either
your 8 bits of Pixel data
-
or your 2 bits of Control data
and the data type.
-
If you went
into our design and looked at it
-
at a high level, in the schematic,
-
you’d probably see a block
that looks like this.
-
The encoder is slightly more complicated
because you also have the DC bias count
-
that you have to keep track of.
But, again,
-
the data goes in
and the data comes out.
-
That’s simple, right?
-
This kind of extends to Auxiliary data,
or to handling errors.
-
There are 1024 symbols
-
that you can have in 10 bits of data.
-
Not all of them are valid.
So if you get one of these invalid symbols
-
you know you have an error.
-
However, things happen quite quickly
-
when you multiply them by ten.
And so our Pixel Clock
-
for 640x480 is 25 MHz.
When you multiply that by ten
-
you get 250 Mbit/s per channel.
When you’re doing 720p
-
you’re doing 750 Mbit/s per channel.
-
And 1080p is at 1500 Mbit/s per channel.
-
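The arithmetic, for the record (using the standard pixel clocks; the talk rounds them):

```python
# Each TMDS data channel runs at ten times the pixel clock.
for mode, pixel_clock_mhz in [("640x480@60", 25.175),
                              ("720p60", 74.25),
                              ("1080p60", 148.5)]:
    print(f"{mode}: {pixel_clock_mhz * 10:.0f} Mbit/s per channel")
```
-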
FPGAs are fast, but
they’re not really that fast
-
in the price range that I can afford to buy.
I’m sure the military has ones
-
that go this fast, but
I’m not as rich as them.
-
But they do include a nice hack
to solve this.
-
They are called SerDes.
They basically turn parallel data
-
into serial data.
-
This is what the boxes look like.
-
You give them your TMDS parallel data
-
and they convert it to
high speed serial data for you.
-
They are a little bit fiddly to use
and your best option is to go and find
-
a person who has already configured this
for your FPGA
-
and follow what they do.
-
“Hamster” – Mike “Hamster” Field – has
really good documentation on
-
how to use these in a Spartan6.
These are also unique to your FPGA,
-
so different FPGAs are going to have
different control schemes.
-
But if you are using a Spartan6
-
then go and look up what
Mike “Hamster” Field is
-
doing for configuring these.
-
Remember how I said,
our system has a serial console.
-
Because we have that system
we can actually delve quite deep
-
into what’s happening
internally in the system.
-
sip from bottle
-
And print it out.
This is debugging from one of our systems.
-
You can see…
-
The first thing is the phase relationship
between each of the channels.
-
The next one is whether
we’re getting valid data
-
on each of the channels and then
we’ve got the error rate for that channel,
-
whether all channels synchronized,
and then some resolution information.
-
You can see that this has got
a 74 MHz Pixel Clock.
-
There are three columns because
there are Red, Green and Blue channels.
-
This gives us some very interesting
debugging capabilities.
-
If you plug in a cable
and you’re getting errors
-
on the Blue channel and nowhere else
-
it’s highly likely there’s
something wrong with that cable.
-
This is a very powerful tool
that allows us to figure out
-
what’s going wrong in a system.
-
It’s something you can’t really get
with the commercial versions of this.
-
But what about errors?
Everything I’m talking about now
-
is a little bit experimental,
we haven’t actually implemented this.
-
But it’s some ideas about
what we can do because we now
-
have complete control of our decoder.
-
As I said, there are 1024 possible choices
for 10 bit symbols,
-
of which 460 are valid Pixel symbols,
-
4 are valid Control symbols
and 560 symbols
-
should never ever be seen no matter what.
-
That’s like 55% of our space
that should never be seen.
-
But it’s actually better than that!
We know because of the running DC bias
-
that there are 256 valid Pixel symbols
-
at any one point. You can’t have the…
if you’ve got a negative DC bias
-
you can’t have a Pixel symbol
which continues to drive you negative.
-
Actually, 74% of our space at any time
-
is not allowed to exist.
-
This means that a huge number
of the invalid symbols
-
are only near one other valid symbol.
-
And so we can actually correct them!
We can go: “This symbol must have been
-
this other symbol,
because it’s not a valid symbol,
-
it must be a bit error
from this other symbol.”
-
So we can correct these errors.
This is quite cool.
-
We can correct about 70% of
-
single bit flip errors in Pixel data.
-
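A sketch of that correction idea; here `valid_symbols` stands in for the set of symbols that are legal under the current running DC bias, however you compute it:

```python
def correct_symbol(sym: int, valid_symbols: set):
    """If an invalid 10-bit symbol is exactly one bit flip away from
    exactly one valid symbol, assume that flip was the error and undo
    it. Returns the corrected symbol, or None if uncorrectable."""
    if sym in valid_symbols:
        return sym
    hits = [sym ^ (1 << i) for i in range(10)
            if sym ^ (1 << i) in valid_symbols]
    return hits[0] if len(hits) == 1 else None
```
-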
But sadly there are some that we can’t.
-
But we can detect that we got
invalid Pixel data.
-
So the fact that there is an error
is important.
-
In this case we’ve got two pixels
that we received correctly
-
and we got a pixel that we know
is an invalid value
-
and then two more pixels
that we received correctly.
-
You can imagine this is a Blue channel,
-
so the first ones were not very blue.
-
Then the decoded value for this one is
-
very, very blue, like very light blue,
and then some other ones.
-
This looks really bad, right?
-
This was probably a whole blue block.
-
A one-pixel difference
of that size
-
is probably not a valid value,
-
and so we can cover it up!
-
We can take
the two pixels on either side
-
and average them and fix that pixel.
-
This allows us to correct a whole bunch
more of the errors that occur.
-
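For a single colour channel that concealment is nothing more than this (an illustrative sketch; the real pipeline would do it per channel in the FPGA):

```python
def conceal(line: list, bad: int) -> None:
    """Replace a known-bad pixel with the average of its horizontal
    neighbours, falling back to the one neighbour at a line edge."""
    left = line[bad - 1] if bad > 0 else line[bad + 1]
    right = line[bad + 1] if bad + 1 < len(line) else line[bad - 1]
    line[bad] = (left + right) // 2
```
-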
And as we’re about to take this data
-
and run it through a JPEG encoder
-
this doesn’t actually affect
the quality of the output
-
all that much, and allows us to fix
things that would otherwise
-
be giant glaring glitches in the output.
-
That’s some interesting information about
-
how you do TMDS decoding
and how we can fix some errors.
-
The thing is, we can do it
even better than this
-
because it’s an open source project.
-
Maybe you have some idea
about how we can improve
-
the SerDes performance.
Maybe you have an idea about
-
how to do TMDS decoding on
much lower power devices
-
than we use. It’s open source!
You can look at the code
-
and you can improve it.
And we would love you to do it!
-
The thing is that I have a lot of hardware
but not much time.
-
If you have lots of time
and not much hardware,
-
I think I can solve this problem.
-
These are links to the HDMI2USB project
-
and the TimVideos project;
and all our code, our hardware,
-
everything is on GitHub
under open source licenses.
-
And here is some bonus screen shots that
I wasn’t able to fit in other locations.
-
You can see these small errors.
-
That one was kind of a big error.
-
This is what happens when
your DDR memory is slightly broken.
-
Yeah…
but – yeah!
-
And that is my talk!
-
applause
-
Herald: Excellent!
Thank you very much, mithro.
-
As you’ve noticed, we have a couple of
microphones standing around in the room.
-
If you have any questions for mithro
please line up behind the microphones
-
and I will allow you to ask the questions.
We have a question from the Internet!?
-
Signal Angel: Yes, thank you!
Do you know if normal monitors
-
do similar error recovery or hiding?
-
Tim: I know of no commercial
implementation that does
-
any type of error correction.
The solution for the commercial guys
-
is to effectively never get errors.
-
They can do that because
-
they don’t have to deal with
angry speakers on the ground
-
going wild because their slides look weird.
-
And, as well, they are probably working
with better quality hardware
-
than we are using. We’re trying
to make things as cheap as possible.
-
And so we are pushing the boundaries
of a lot of the devices we are using.
-
So we are more likely to get
errors than they are.
-
Herald: We have quite a lot of questions.
Remember – questions, not comments!
-
Microphone number 1, please!
-
rustling sound from audience
coughing
-
Tim: Yes!
-
unrecognizable question from audience
-
Sorry, I don’t quite understand
what’s going on! chuckles
-
Herald: Do we have a translation?
-
Voice from audience: Audio Angel!
-
Tim: Audio problem?
-
Herald speaks to person
in front of stage in German
-
Tim: I’ll be around afterwards,
If you want to chat to me, ahm…
-
Herald: And we might do that… ah…
write you on the computer afterwards.
-
Second question from
microphone number 3, please!
-
Question: Hello? Ah, yes. Can you
determine the quality of a HDMI cable,
-
e.g. by measuring bit error rate
of each of the three pairs
-
and also some jitter on the clock,
and that kind of…?
-
Tim: Yes we can!
-
The mark of a quality HDMI cable should be
that there are zero bit errors.
-
So anything that has non-zero bit errors
we chop up and throw away.
-
This gets interesting
when you have very long cables.
-
We can actually see that
the longer the cable is
-
the harder it is
to keep zero bit errors.
-
So yes, we can kind of judge
the quality of the cable.
-
But it’s also hard because
-
it depends on what the sender is doing.
-
If the sender is of a lower quality
-
and the cable is low quality
you might get bit errors.
-
But if the sender is of a high quality
-
and the cable is of a lower quality
-
they might cancel each other out
and still be fine.
-
We can’t just go: “This is a good cable”
-
because we don’t actually have
any control over our…
-
how powerful our sender is on this device.
-
If we could kind of turn down the sender
-
and see where things start going wrong
that would be pretty cool.
-
If anybody wants to
look at building such a device
-
I’d love to help you do that.
-
Herald: We have another question
from microphone number 5.
-
Question: Your hardware,
the HDMI2USB hardware…
-
Tim: Yes!
Question: Is it available for simply ordering
-
or does it have to be soldered by hand,
or…
-
Tim: You cannot solder this board by
hand unless you are much, much better
-
than I am. It uses Ball Grid Array parts
because it’s an FPGA.
-
This is one here.
You can buy them.
-
We’re working with a manufacturer in India
who builds them for us.
-
We work with them,
and it was pretty awesome.
-
We’re also working on new hardware.
I’ve got a whole bunch
-
of FPGA hardware here
that you can come and have a look at
-
and I’ll probably move it out
into the hallway afterwards.
-
Again, if you’re interested
in the hardware and you have a use case,
-
chat to me!
Because I like to solve the problems
-
of people who don’t have hardware,
and my employer pays me too much.
-
So I get to use my discretionary funds
-
for helping out people
doing open source stuff.
-
applause
-
Herald: We have at least four more
questions. Microphone number 2, please!
-
Question: Do you think it would be
possible to get a 1080p image
-
out of the open source
hardware board you use?
-
Tim: Yes, I do, but it requires
us to do some hard work
-
that we haven’t had time to do yet.
-
And for us 720p at 60 frames
per second is good enough.
-
The USB connection
-
is limited in bandwidth because
we don’t have an H.264 encoder,
-
we only have MJPEG. If somebody wants
to write us an open source, say, WebM
-
rather than H.264 encoder
-
that might start to become more interesting.
We also have ethernet, Gigabit ethernet,
-
on this board. It should be pretty easy
to stream the data out the ethernet.
-
I, again, need help.
-
The ethernet controller works.
We can telnet into the board
-
and control it via Telnet.
We just need somebody to
-
actually connect the data,
the high speed data side up.
-
We use it for debugging and stuff.
-
Mike “Hamster” Field again,
really big thank you to him,
-
he is an amazing designer,
-
he built a 1080p60 mode that is
a little bit out-of-spec
-
but actually works really well on hardware
-
that is almost identical to ours.
He also did the DisplayPort,
-
like a 4k-DisplayPort which
we can do on our board.
-
If you only need one or two
of the 1080p things
-
DisplayPort connectors can be
converted to HDMI quite easily
-
and you can do that on them.
-
Yes, I think that’s possible,
but again:
-
open source … hobbyist …
need developers …
-
Herald: We’ll take one question
from the internet!
-
Signal Angel: Thank you. Have you
considered JPEG2000?
-
Tim: No, I’ve not. And the main reason
is that I want to be a webcam.
-
I want to pretend to be a webcam. The UVC
standard, which is the USB webcam standard,
-
does not support JPEG2000.
-
There’s no reason we couldn’t support
JPEG2000 when connected to Linux,
-
like we could fix the Linux driver
to add JPEG2000 support.
-
Again, I don’t know if there are any good
open source FPGA implementations
-
of JPEG2000? So, that’s also a blocker.
-
But if you’re interested in helping out
– come and talk to me.
-
As I said, I would very much love
-
to chat to you and solve
the problems you’re having
-
with getting going.
As well, we have t-shirts.
-
I’m wearing a t-shirt that we have, and
I will send everybody who contributes
-
a t-shirt. Whether that’s fixing our
website, helping on documentation,
-
helping people on IRC
getting setup, anything.
-
You don’t need to be an expert
on FPGA stuff to help out.
-
And we also are working on
a little project to run MicroPython
-
on FPGAs. So if you’re really into Python
and you like MicroPython
-
I would love to help you help us do that.
-
It’s kind of working. We just need more
powerful (?) support. So.
-
Herald: We have two more questions from
microphone number 1.
-
Question: So, is there some sort of
dedicated processor on that board,
-
or do you use like a Microblaze
in the FPGA?
-
Tim: We use an open source soft core.
One of three.
-
We can change which soft core
we’re using with a command line flag.
-
We’re using either the LatticeMico32
-
which is produced
by Lattice Semiconductor.
-
We can use the OpenRISC-1000
-
or we can use a RISC-V processor.
-
We generally default to LM32
because it gives the most performance
-
for the least FPGA resources.
-
But if you like RISC-V
or OpenRISC-1000 better
-
for some reason, say, you want
to run Linux on our soft core,
-
then you can do that. With a one line
command line change, yeah!
-
We’re looking at adding
J-Core support in early next year.
-
J-Core is quite big, though,
compared to LM32. So,
-
it probably won’t fit on some
of the very small devices.
-
Question: So it’s a Lattice FPGA?
-
Tim: It’s a Spartan6 FPGA. And our new
boards will probably be Artix-7
-
But we’re still in the process
of making them exist.
-
Question: Thanks.
Tim: I’ve also been working with
-
bunnie’s NeTV2, porting
our firmware to that,
-
which has been really awesome.
He’s doing some cool work there,
-
and he’s kind of inspired this whole
development by showing that,
-
yes, you could do this,
and you shouldn’t be scared of it.
-
Herald: Good, one more question
from microphone number 1.
-
Question: Yes. Do you have any plans for
incorporating HD-SDI into your platform?
-
Tim: Yes and no!
We have plans and ideas
-
that we could do it
-
but HD-SDI
-
and all of the SDI protocols are
much harder for the consumer,
-
generally to access, and we want
to drive the costs of this down
-
to as low as it can go. And…
-
HDMI is a consumer electronic thing.
You get it on everything.
-
You get it on your
like five-buck Raspberry Pi.
-
HDMI is probably
a really good solution for this.
-
We haven’t developed any SDI cores
or anything like that,
-
so I can’t tell you like
that we’re doing anything there
-
but if somebody’s interested, again,
I like to remove roadblocks and
-
we would love to have people work on that.
-
Herald: We have one more question from
the internet and we have two minutes left.
-
Signal Angel: OK, thank you. The question
is not related to HDMI but to FPGAs.
-
FPGAs are programmed in a high level
language like Verilog, and
-
after simulation you compile. So every
vendor has created its own compiler
-
for its own hardware. Are you aware of
a move to open source compilers
-
or to independent hardware? And do you see
a benefit in open source FPGA compilers?
-
Tim: Yes! If anybody knows
-
about FPGAs you know
they use proprietary compilers.
-
And these proprietary compilers
are terrible.
-
I’m a software engineer.
If I find a bug in gcc
-
I can fix the bug. I’ve got those skills,
and I can move forward or at least
-
figure out why the hell the bug occurred.
-
That is not the case with FPGA compilers.
The FPGA compiler we use
-
is non-deterministic. You can give it
the same source code and it produces
-
different output. I’d love somebody
to reverse-engineer why that occurs
-
because I’ve removed all the randomness
from random sources from it
-
and it still manages to do it!
I’m really impressed. So,
-
Clifford has done an open source
FPGA tool chain
-
for the Lattice iCEstick things.
-
He said he’s gonna work
on the Artix-7 FPGAs.
-
Please donate to him and help him.
I would… like…
-
if that exists I’ll owe people, like,
a bazillion beers, because
-
the sooner I can get off proprietary
tool chains the happier I will be,
-
and it will make my hobby so much nicer.
So, please help him!
-
Herald: And do give Tim
a big round of applause!
-
applause
-
postroll music
-
subtitles created by c3subtitles.de
in the year 2017. Join, and help us!