Alright. Let's do some presents.
How this is going to work
is that the presenters get
roughly two minutes
to show their present.
You get to applaud.
Questions at the very, very end
if we have time.
So, we're going to start
with Joachim.
who's going to talk about
the 20th Century Press Archives.
Thanks.
I'm presenting the first part
of a data donation by ZBW
from the 20th Century Press Archives,
which is, to the best of our knowledge,
the largest public newspaper clippings
archive in the world.
It existed from 1908 to 2005.
And it evaluated more
than 1,500 periodicals
from Germany and from all over the world.
The material was organized
in folders
as you see here.
Starting from a small corner,
the persons archive,
25,000 folders
with more than 2 million articles
and digitized pages
are online now.
The integration of the persons archive
metadata into Wikidata
has recently been completed.
And all of the more
than 5,000 person folders
are accessible from Wikidata now.
More than 6,000 facts
sourced from the persons archive metadata
have been added to Wikidata,
and this includes rather
complex relations
like between persons and companies,
and their role in the company.
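The kind of complex relation described here, a person, a company, and the person's role there, is typically modeled as a statement with a qualifier. Below is a minimal sketch in Python of such a statement in Wikibase JSON form; the property choices (P108 employer, P39 position held) and the item IDs are illustrative assumptions, not necessarily what this project used.

```python
# Sketch of a Wikibase-style claim linking a person to a company,
# with a qualifier recording the person's role in the company.
# P108 (employer) and P39 (position held) are illustrative choices.

def make_employer_claim(person_qid, company_qid, role_qid):
    """Build a minimal claim dict: person --employer--> company, role as qualifier."""
    return {
        "mainsnak": {
            "snaktype": "value",
            "property": "P108",  # employer
            "datavalue": {
                "value": {"entity-type": "item", "id": company_qid},
                "type": "wikibase-entityid",
            },
        },
        "type": "statement",
        "rank": "normal",
        "qualifiers": {
            "P39": [  # position held: the role within the company
                {
                    "snaktype": "value",
                    "property": "P39",
                    "datavalue": {
                        "value": {"entity-type": "item", "id": role_qid},
                        "type": "wikibase-entityid",
                    },
                }
            ]
        },
    }

# Hypothetical IDs: person Q1234 worked at company Q5678 in role Q484876.
claim = make_employer_claim("Q1234", "Q5678", "Q484876")
print(claim["mainsnak"]["datavalue"]["value"]["id"])  # → Q5678
```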
The next big challenge will be
the countries and categories archive
with more than 9,000 folders
which is organized by
a hierarchy of countries
and a hierarchy of categories.
It's a whole system
of knowledge organization
about the whole world.
It's materialized in newspaper clippings,
and mapping this to data
is a challenge,
so please consider joining
the WikiProject
20th Century Press Archives.
(applause)
Thank you Joachim.
Alright.
Lucas.
Stage is yours.
Hello.
I'm presenting two things,
so I get four minutes--
I've been told I've hacked the system.
(laughing)
So...
The first thing is
on behalf of Wikimedia Germany,
which is first version
of Lua support for lexemes.
(audience) Whooah!
(loud applause)
So you can see some Lua code here,
though there's probably
not enough time to read it,
and I'm not a great Lua programmer anyway,
but the result is down there.
We have access to the lexemes,
the forms, the senses,
and also statements, which are not
in the screenshot.
And it's not deployed anywhere yet
so that was just on my local Wiki.
(laughing)
But we're hoping to get it
at least to beta soon.
Probably to Test Wikidata
pretty soon afterwards
and then we'll see
where it goes,
and it's a start at least.
Thanks.
(applause)
And the second thing
I'm doing as a volunteer
so there's--
I made this tool a while ago
called Wikidata Image Positions.
So if you have a statement on an item
that it depicts something,
for example a painting
could depict a person,
you can add a qualifier there
saying that this--
where in the image this is so--
like this person
is in the upper left corner
of the image or something,
and that is now supporting structured data
on Commons as well.
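The region qualifier described here corresponds, on Wikidata, to the relative-position-within-image qualifier (P2677), whose value is an IIIF-style "pct:x,y,w,h" percentage string. Treating that format as an assumption, a small helper to build it from pixel coordinates might look like:

```python
# Convert a pixel-space bounding box into the "pct:x,y,w,h" string used by
# the relative-position-within-image qualifier (P2677). The format follows
# the IIIF region syntax; this helper is a sketch, not the tool's code.

def region_to_pct(x, y, w, h, img_w, img_h):
    """Express a box (in pixels) as percentages of the full image."""
    pct = lambda v, total: round(100.0 * v / total, 1)
    return "pct:{},{},{},{}".format(
        pct(x, img_w), pct(y, img_h), pct(w, img_w), pct(h, img_h)
    )

# A 400x300 box at (200, 100) inside an 800x600 image:
print(region_to_pct(200, 100, 400, 300, 800, 600))  # → pct:25.0,16.7,50.0,50.0
```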
And if the presents page is open
somewhere...
No, not like that. I'm very sorry.
We can change that.
Yes, or read my emails.
(laughing)
(audience 1) So much unread media.
(laughing)
- That one.
- There you go.
So, there it is.
This is a picture I took
earlier this year,
and there's already some
structured data here
that says depicts
certain pride flags,
and once this loads,
I can define the region.
There we go,
and this now also uses the same library
as CropTool instead
of my home-grown bad thing,
which I think Andy was
asking for years ago,
and now it's finally done.
And I say use this region.
It's adding a qualifier.
Let's do the same thing
over here.
Just roughly drawn with a mouse.
That should be good enough.
And the third one.
There we go.
Use this region.
And now if we check Lydia's contributions
- on Commons.
- (laughing)
commons.wikimedia.org.
She gave me permission
to do this by the way.
(laughing)
User contributions.
Where you can see
some new qualifiers here,
and if we load this,
there's also a user script,
which is also hopefully working,
- which shows you these regions correctly
- (crowd) Wooah! (applause)
on Commons.
(applause)
So basically the days of the
old annotation gadget are numbered.
(laughing)
And that's it. I think we can skip the
dozen backup screenshots here now
and go to the next person.
- Thanks.
- (audience) Woohoo. (applause)
Which one? This one.
Okay,
So we have a Lexeme
uploading bot.
It's not developed by me.
It's developed by the Elhuyar Foundation.
And it's developed to upload
Basque language lexemes
with all their forms,
because its lexemes have 65 forms.
So, it's not something
that we can do by hand easily.
And also senses.
You can download it there.
I don't even know how it works.
(laughing)
And all I know is that
it's based on Wikidata Toolkit,
so it's a subversion of that,
and it's also in the [inaudible]
of Wikidata, I think.
That's it.
(applause)
- (Lydia) Thank you.
- (audience) Woohoo (applause)
Hello.
I may have to reload this.
Let me just make sure.
Does it work? Yes. Present.
Great.
Now, this is a project
we've been working on for a while,
but we are rolling it out
for the first time
to a big crowd here
at WikidataCon
and we'll show you some--
a real cool feature
that we didn't tell you about
earlier today.
So this is a project called
the Wiki Art Depiction Explorer,
and it is an attempt
to give an interface
better than
just editing a raw Wikidata item
when it comes
to adding depiction information
for artworks,
so this is a project that was funded
by the Knight Foundation,
and Wikimedia D.C.
and the Smithsonian,
we worked together on this project
and with the amazing development skills
of Edward Betts, right here.
So this is an example
of what you'll see,
and we invite you all to try it out.
art.wikidata.link
is the URL,
and the idea here is that
you can see a large version of the picture
and we will try to bring in
whatever we can
from the object page
of the institution
that holds this image.
So, we're bringing
in some description information,
so that the person trying
to add depicts information
has some additional reading
that they can use here.
We also bring in some keywords,
and the great thing about this is,
when you start typing in the box,
you are actually given matches
against whatever's been previously
used in depicts statements.
So, this is a much
tighter controlled vocabulary.
It gives you a much better direction
of what to do.
So here is an example here,
if this works correctly,
we should be able to click that.
And also make more edits
on Lydia's behalf. (laughing)
And we can go in here
and type in ballet,
and you'll see that it doesn't match
everything on Wikidata
but only things that are relevant
based on previous depicts
statements,
so I can say ballet dancer.
I can go back in there
and add these different--
Oops.
Ooh. Not sure
why it's not working.
Anyway, let me go ahead
and make those edits
and then now that has been committed
and you can actually start
browsing other things.
So the idea is to keep you
in this universe of paintings
and artworks, and not just
punch you back out to Wikidata.
But we also have another
bonus function
that Edward is going to show you.
So, for this painting we've got the dates
of birth and death for this person.
So, we do a search
and these are all the people
that were born and died in those years
and so...
are we at Aurora?
- Yep. Can you see the match?
- Which one is it?
- I didn't see the match.
- Yep, further down.
Maria Aurora. Right there.
- So, if I click on that.
- So, you click on that.
and then scroll down.
And then we go-- Oh it's...
(laughing)
There you go.
Add these to the painting.
(audience 2) You need to highlight
word metrics from the title.
That... it'll come.
(laughing)
So, it worked.
It saved it to the painting.
So it'll match all humans
with that birth date
and death date,
and you can click that automatically.
- And that's it. So go ahead and try it.
- (audience) Woohoo! (applause)
Oh, those are just some stats.
We had a whole bunch
of people try it already today,
and we upped this number today.
So keep working at it. Thanks.
Next one is Bruno.
Hello.
I'm Bruno from Google
and we are open sourcing
what we call lexical masks.
A little bit less sexy than the picture
we just saw.
It's a config file that specifies
what a lexeme needs to look like,
what kinds of forms
you expect in a lexeme,
and what kinds of features you want
on those forms.
The example here is German nouns,
which have an inherent gender,
and a couple of forms
specified here
with a couple of features you expect.
The masks, or the text files
that you see here,
will be uploaded to Wikidata
so that they can help the lexeme community
to check consistency
and increase the coverage.
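As a rough illustration of what checking a lexeme against such a mask involves, here is a Python sketch; the mask format shown is invented for illustration and is not the released config format.

```python
# Toy "lexical mask" check: the mask lists the grammatical features each
# form of a lexeme is expected to carry; we report which expected forms
# are missing. The mask format here is made up for illustration.

GERMAN_NOUN_MASK = {
    "required_lexeme_features": ["grammatical gender"],
    "required_forms": [
        {"number": "singular", "case": "nominative"},
        {"number": "plural", "case": "nominative"},
    ],
}

def check_lexeme(lexeme, mask):
    """Return the list of expected forms missing from the lexeme."""
    missing = []
    for expected in mask["required_forms"]:
        found = any(
            all(form.get(k) == v for k, v in expected.items())
            for form in lexeme["forms"]
        )
        if not found:
            missing.append(expected)
    return missing

# A lexeme that only has its singular nominative form so far:
haus = {"forms": [{"number": "singular", "case": "nominative"}]}
print(check_lexeme(haus, GERMAN_NOUN_MASK))
# → [{'number': 'plural', 'case': 'nominative'}]
```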
More details on the talk I gave
earlier today
and Lydia's.
Thank you.
(applause)
Yes, so for the past two years,
I have had a hobby,
because I was not, let's say,
very happy
with the current SPARQL implementations,
especially Blazegraph.
So, during my free time,
I started a project
I called Oxigraph
so, it's basically like Blazegraph
but different.
(laughing)
So, it's starting to get into
a state that works,
so SPARQL queries are implemented.
But it's not been optimized yet;
currently there is no optimization
of how queries are executed.
But as you're seeing in
this small experiment
with some SPARQL queries,
the results seem fairly promising.
What is nice is I used Rust
to implement it,
and I managed to get the memory footprint
fairly reasonable,
as well as some--
Blazegraph origin
or [inaudible] so,
I hope that maybe in the future
I'm going to get
maybe other people who can
then spend more time to make it
work well,
and we could have something very good
for at least smaller Wikibase [inaudible]
with a few million
or tens of millions of [inaudible].
So the repository is here,
and it's now working fairly well.
It's a work in progress
and if you want to test it,
or contribute,
you are very welcome
because it's a big task.
Thank you.
(audience cheers and applause)
I was going to do a live demo
but it didn't go well earlier,
so this is a video,
(laughing)
which...
Wait, I can do this on my phone right?
Access denied.
(singing) Ta-ta-ta-ta-ta
Two minutes.
Well the video is two minutes long.
(laughing)
Ta-ta-ta-ta-ta
(audience 3) We have two people here
from Google.
I'm sure they can come up
with something. (laughing)
Okay, we might just watch
it in this tab.
Come on.
Da-da-da-da-da
Yeah, so I didn't do a live demo
because I thought that would go badly.
But--
(laughing)
So, this is something I've been working on
ever since creating
the Docker images two years ago,
and this is a sort of
shared platform website
where you can go ahead
and make an account.
Ta-ta-ta-ta-ta
And then you've got a lovely button
on the next page
which allows you to create a Wiki.
There's a lot of features
missing at the moment,
so you just get to choose a wiki name,
where it is, and a user name for now.
But the possibilities are endless,
and it goes and creates a Wiki
in a shared environment,
saving all of us
the expensive resources
that we're all spending
running Wikibases.
I recorded this just downstairs earlier
so, this is like, kind of real.
I sped it up slightly but--
You get emailed your MediaWiki
temporary password.
You can log in with the user account
that you made.
Da-da-da-da.
Then you have to change your password.
This is where I just copy
part of the URL in.
(laughing)
And then you're logged into your
very own Wikibase
that has QuickStatements,
a query service,
everything managed for you
so that you don't have to worry about it.
(audience) Woo! Woo woo woo.
(applauding)
So you can go and create
all of your items, use tools,
and I plan on adding
more tools in the future.
All of the complexities are hidden.
At the moment this is live
on this wbstack.com domain name.
But you need an invitation code from me.
If you want to try one out
during WikidataCon,
come and talk to me,
and I will give you one.
It's full of bugs at the moment,
and stuff so don't rely on it.
This is QuickStatements working,
and then on the Saturday
of WikidataCon
I'll delete all of the data,
and then we'll be using this
for the Wikibase workshops
on Sunday if any of you are attending.
And then it will get a real test,
and so then you can see
the two edits have happened.
I'm so glad I didn't do this
as a live demo,
and then you go to the query service.
You type in the query
that shows you all of the triples.
Even in the recording, you do it wrong.
(laughing)
And then you have all your triples.
(audiences cheers and applause)
Alright.
So, Happy Birthday Wikidata!
I started on this last year,
actually for the WikiCite meeting,
which was about a year ago, and--
got something running
and got a lot of encouragement
from Daniel Mietchen
who's probably watching online.
Hi Daniel.
(laughing)
And he's given me all sorts of ideas
for improving it,
and so just in time for this meeting,
I've got a new version out
that does more.
So basically what this is
is a tool to replace,
in mostly scientific articles
but really any work
where there's an author name string,
that string with an actual
author item
pulled from Wikidata, using
various things to match
it properly.
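The matching step can be illustrated with a toy heuristic for comparing an author-name string (the P2093 value) against a candidate author's full name; this is not the tool's actual logic, just a sketch of the idea.

```python
# Naive author-name matching: tolerate the common "J. Smith" vs "Jane Smith"
# abbreviation, but require the surname to match exactly. Illustration only,
# not the tool's real matching algorithm.

def names_compatible(name_string, full_name):
    """True if name_string could be an (initialed) form of full_name."""
    short = name_string.replace(".", "").split()
    full = full_name.replace(".", "").split()
    if len(short) != len(full) or short[-1].lower() != full[-1].lower():
        return False  # surname must match exactly
    # each given-name part must match fully, or be an initial of it
    return all(
        s.lower() == f.lower() or (len(s) == 1 and f.lower().startswith(s.lower()))
        for s, f in zip(short[:-1], full[:-1])
    )

print(names_compatible("J. Smith", "Jane Smith"))   # → True
print(names_compatible("J. Smith", "John Smythe"))  # → False
```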
So the changes are: most recently,
you can log in with your Wikimedia
user account
and do the edits directly,
rather than, as previously,
it all just going through QuickStatements--
I've got about a million
QuickStatements edits now.
(laughing)
This now bypasses that.
Another new thing
is you can actually go in
and look at a work,
and update all of the authors
on that work at once.
So there's a match button.
You can also rearrange the author list
if they're out of order or something.
There's also--
Oh yeah, this is an example
of what that looks like,
so when you're matching
it up with authors it lists
some information about their affiliation
as it is in Wikidata.
The other thing that's new
is some automatic filtering.
If you go to the bottom of a page
that is a search for an author name,
you'll see links to coauthors,
links to other--
to the topics they've written on,
links to their journals
and so you can filter,
and narrow down the list
of your listed works
that you're looking at
to just those things
that have those particular
features in them.
Anyway, that's what's new there.
And that's it. Thank you.
(cheers and applause)
Hi all.
I'm also from WMDE
and a volunteer,
but this is volunteer work.
It's a tool called MachtSinn.
A few of you might...
a few might have already seen it,
but I improved it greatly
in the last week.
So we have these lexemes nowadays
and on these lexemes
you should add the senses--
what the word means,
and all the different meanings
a word can have,
and we have a lot of lexemes now
that are still missing senses,
that don't have any sense
and... (laughing)
in a lot of cases,
we also have items
about the concept that
this sense is about,
and so I thought we could mine
the senses,
and this is what MachtSinn does.
It shows you a lexeme--
in this case the English
word tune,
which is a verb--
and a possible meaning.
In this case: short instrumental piece,
a melody, and you're asked:
is this a meaning for this word?
And if you click the blue button,
it will save it to Wikidata.
And if you click the right button,
it will throw it away.
And you log in to
the tool with your Wikidata account.
We use OAuth.
And a few people are already using it
and have added 6,000 senses,
and there are currently
about 40,000 senses
waiting to be considered,
and tested,
and also, for writing this,
I had to first write some Python tooling
to modify lexemes, because
Pywikibot
and the other common tools
don't support that.
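As an illustration of the kind of request such tooling has to assemble, here is a sketch of an add-sense API call. The wbladdsense module name and payload shape are assumptions based on the WikibaseLexeme API, and the lexeme/item IDs are hypothetical; this builds the request dict only and sends nothing.

```python
# Sketch: build the parameters for an "add sense to lexeme" API request,
# with an English gloss and an "item for this sense" (P5137) statement
# pointing at the matched concept. Module name and payload shape are
# assumptions about the WikibaseLexeme API; IDs are hypothetical.

import json

def build_add_sense_request(lexeme_id, language, gloss, concept_qid):
    sense_data = {
        "glosses": {language: {"language": language, "value": gloss}},
        "claims": {
            "P5137": [{  # item for this sense
                "mainsnak": {
                    "snaktype": "value",
                    "property": "P5137",
                    "datavalue": {
                        "value": {"entity-type": "item", "id": concept_qid},
                        "type": "wikibase-entityid",
                    },
                },
                "type": "statement",
            }]
        },
    }
    return {
        "action": "wbladdsense",
        "lexemeId": lexeme_id,
        "data": json.dumps(sense_data),
        "format": "json",
    }

req = build_add_sense_request("L123", "en", "short instrumental piece", "Q999")
print(req["action"])  # → wbladdsense
```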
Yes, thanks.
(applause)
Hi, Happy Birthday.
(laughing)
Happy Birthday!
(audience) Woo!
(laughing)
Are you eating cake
during my presentation?
Okay, what to expect from a data scientist
for [inaudible]? A dashboard, of course.
Okay, so this time--
well, there's the end product. (laughing)
This time something called
Wikidata Languages Landscape.
So--it's a dashboard,
as I said,
and some of the empirical findings
that it presents
through the Wikidata statistics
already surfaced today
in Lydia's talks,
and basically focuses
on the structural organization
of languages in Wikidata,
on the similarity of Wikidata languages
in respect of how they're reused across
the Wikimedia Foundation projects, right?
And it also combines
some of the external resources
with those statistics in order
to provide a comprehensive view
of how different languages cope
in this Wikimedia universe.
So, there's a link to the dashboard
so I was warned not to do this,
but I will try. (laughing)
Sorry.
(laughing)
So depending on the--
Yes. Yes! It can be done. (laughing)
Okay, so I will be even able
to do a live demo maybe...
Okay, come on, come on, come on.
It's still computing.
It's a very complicated service.
Yep, here we go.
Okay, the first thing
that you will be able to see.
Okay. Yes alright.
(audience) Wow...
Wow. As I said a Data Scientist,
so this is not really informative, right?
Okay.
So here you have all the languages,
or most of the languages, in Wikidata,
and we're focusing on
those languages
that were ever used anywhere
in Wikimedia, okay?
So, we're talking about languages
that actually have labels for things
that are mentioned in--
across Wikipedia, Wikivoyage
and other projects, right?
So, and this is only
a subontology of languages like so.
This depicts only the instance
of [inaudible] in [inaudible]
So that you can actually use
probably these tools here
to browse this thing.
It's not aesthetically pleasing
but at least it's complete.
One of the byproducts
of this work is--
sorry, not this thing,
but this thing here.
So, this is a small visual browser
that can help you
figure out what is wrong
with the ontology of languages
in Wikidata,
and if you want to fix something,
it makes it easier for you to find it.
So, while working on this thing.
I figured out that the language
ontology is particularly complex,
really complicated, okay?
And then there are some inconsistencies
there, for example.
Well at least in my intuitive
understanding of semantics,
you can't be at the same time
a part of something
and a subclass of something.
I mean you can,
and Wikidata is really flexible enough
to allow you to do that,
but probably some things
need fixing in that respect,
and here for example,
you can find the language,
say, for example, Serbo-Croatian.
It used to be my native language
before it fell apart
into Serbian, Croatian, Bosnian, etc.
Okay, and here you have
all the relations
like P31 part of subclass [inaudible]
different marks.
So, if anything needs to be fixed
instead of browsing the whole--
the whole structure
of the whole ontology
you can go here
and just make it shorter, right?
And then things like clustering
the languages--
many people liked this on Twitter.
It actually cost me
half of my life
to produce this thing.
Okay, it's huge.
So yeah, this is the dashboard
to go play
many interesting things.
Thank you very much.
(applause)
So, this is--
This presentation is called
WikiShape,
and this is something
that we have been working on
in the Shape Expressions community group,
we want to have like
the Wikidata query service
which is I think is something
that most of you are using
to do SPARQL queries,
but now we want to do
the same thing
but for Shape Expressions.
So we wanted an editor
which is as easy to use
and to work with
as the Wikidata Query Service.
That's what we are going to--
that's the goal of WikiShape,
so you have a Shape Expressions
editor and validator.
You also have syntax highlighting,
auto-completion,
schema visualization,
and search.
So this is just a screen.
Well, this is--
I could click on that
but I prefer not to do that.
You can have info about schema.
You can have sign information
about the schema.
You can visualize the schema.
This is--you can autocomplete.
For example just start writing work
and it finds the schemas for that.
Then, you also have the editor
and as you can see,
this is for written work.
You can have this editor
of the schema of the Shape Expression.
If you hover with a mouse,
it highlights the name of the label
of the property
which is the same
as the Wikidata Query service
so the goal is that
now that you have Shape Expressions,
the goal is that you are
using Shape Expressions
to validate your data
to increase the quality
of the Wikidata--data...
using Shape expressions.
And you can also visualize
the schemas.
Once you have a schema,
for example for written work,
with the author or whatever,
you can visualize
all that.
So, that's the goal of WikiShape.
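For a sense of what such a schema looks like, a Shape Expression for the written-work case mentioned above might read roughly like this; the constraints are illustrative assumptions (Q47461344 written work, P50 author, P577 publication date), not a schema taken from the talk:

```shex
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX wd:  <http://www.wikidata.org/entity/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

# A written work: typed as such, any number of authors, optional publication date
<writtenWork> {
  wdt:P31  [ wd:Q47461344 ] ;  # instance of: written work
  wdt:P50  IRI * ;             # author(s), zero or more
  wdt:P577 xsd:dateTime ?      # publication date, optional
}
```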
(applause)
I love Wikidata.
I'm very proud of the work I do
and my friends do on Wikidata,
and I know most of you are pleased
to work on Wikidata as well.
It's come to my attention
over the last couple of days
that a couple of you
are working on a rival product
(laughing)
and undermining what happens
on Wikidata.
This product is apparently called,
"Wiki-dah-ta" (enunciates the 'a')
(laughing)
I've never heard
of this "Wikidata" before.
So, in order to get things correct
because some of you are doing it wrong.
(laughing)
If I can find the mouse pointer,
how do I open this?
Here we go.
We have here.
(laughing)
(computer) Wikidata.
You probably can't hear.
That is very, very quiet.
Let me do that again.
- (audience) No the other one.
- The one next to it.
(Andy) Oh yep, I got you.
You're not going to hear this anyway.
(computer voice) Wiki-day-ta.
(laughing)
So, there it is for the record.
(applause)
But, okay.
Joking aside if those of you
who do have speech impediments
would like to make a version
of your pronunciation,
then please feel free.
Thank you. (laughing)
(applause)
So, I thought I was--
I had the laziest present
but then Andy beat me to it.
(laughing)
So, because I literally made this present
an hour ago.
Some of you might know VizQuery
which is a tool I made
to visually query "Wiki-dah-ta,"
and now I saw this tweet from Maarten
just an hour ago
saying, "Hey, there's a preview
of the Commons Query Service"
so I thought, what would happen
if I just changed the SPARQL
endpoint
of my tool to the beta
Commons SPARQL endpoint,
and just added it to my tool,
of course.
Now we need to wait for the wifi.
I should have made a video
but of course,
given that I just had an hour,
here we are.
So, for those of you
who don't know VizQuery,
it allows you to do things like say,
"depicts"
and say, it "depicts a cat,"
and so what you get
are all the Wikidata items
that depict a cat with pictures.
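Under the hood, a visual query like that corresponds to a SPARQL query along these lines; this is a sketch (P180 is depicts, P18 is image, and Q146 is house cat on Wikidata; variable names and the LIMIT are arbitrary), not the tool's generated query verbatim.

```python
# Build roughly the SPARQL that a "depicts a cat" visual query translates to.
# P180 = depicts, P18 = image, Q146 = house cat on Wikidata.

def depicts_query(target_qid, limit=20):
    return (
        "SELECT ?item ?image WHERE {\n"
        "  ?item wdt:P180 wd:%s .\n"
        "  OPTIONAL { ?item wdt:P18 ?image . }\n"
        "}\nLIMIT %d" % (target_qid, limit)
    )

print(depicts_query("Q146"))
```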
However, what you can do now
is you go all the way down
there's a link saying
use the Wikimedia Commons SPARQL
endpoint experimental.
And now when I say "depicts a cat,"
you will actually get
- Commons images of cats.
- (audience) Woo!
(applause)
So, let's say you want a cat
that actually shows its whiskers.
Now we're going to get...
That's it.
So, well. Thank you.
(laughing)
(applause)
An hour ago well--
(snickers)
That was ten minutes ago.
(laughing)
So, some time ago I demoed this tool
at the Wikimedia Hackathon in Prague,
called Integraality,
making dashboards
of property coverage,
and I introduce to you
the service pack update 2019.
(laughing)
So this is Integraality,
in case you haven't seen it yet.
It makes things like this--
ah, it's cute--
for paintings,
with columns of properties
and lines for different groupings.
So, how to slice and dice the data,
and I bring you a couple of improvements
that are going to be live-demoed.
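The core computation behind such a property-coverage dashboard can be sketched like this, with made-up sample data; the property IDs (P170 creator, P571 inception) are just examples.

```python
# For each group of items and each property, compute the share of items
# in the group that carry that property. Toy data for illustration.

def coverage(items, properties, group_key="group"):
    """Map group -> property -> percentage of items in the group having it."""
    table = {}
    for item in items:
        g = table.setdefault(item[group_key], {"_count": 0})
        g["_count"] += 1
        for p in properties:
            g[p] = g.get(p, 0) + (1 if p in item["claims"] else 0)
    return {
        group: {p: round(100.0 * g[p] / g["_count"]) for p in properties}
        for group, g in table.items()
    }

paintings = [
    {"group": "Louvre", "claims": {"P170", "P571"}},  # creator, inception
    {"group": "Louvre", "claims": {"P170"}},
    {"group": "Prado", "claims": {"P571"}},
]
print(coverage(paintings, ["P170", "P571"]))
# → {'Louvre': {'P170': 100, 'P571': 50}, 'Prado': {'P170': 0, 'P571': 100}}
```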
So some people wanted
to be able to query for qualifiers
because some properties
are not top level.
So if we do this...
(laughing)
and...
also some people wanted to display images,
which I guess is not
the greatest display.
Alright. Loading.
It's supposed to be fast.
Supposed to be fast (laughs)
I know it works
because I already did it.
Yes, updated page.
Yep and that works.
Now the street number.
That worked. (applause)
Pictures, maybe
you're going to make it.
Ah. Nah.
Well, everything [inaudible].
(laughing)
really works.
(laughing)
Yep. Yep. That worked.
Yeah, also works with images.
(audience) Woo!
(applause)
So there were two picture requests
but they were not the worst
and this one was literally done--
Oh what could be that link here?
I wonder.
Okay. Not this one--it's going
to be too big.
Let's go for yeah--
this is going to be fine.
Yeah, the reason why I spent
my entire time
at the conference doing this
is because I spent the last few weeks
writing tests for all the code
that I wrote in Prague,
and it's like--
Oh, so what could these links be?
Yeah, these are the items
that have the property,
and if you go the other one
that are the items
that don't have the property,
so you can actually make this--
dashboard completely blue,
if you spend enough time.
(laughing)
Yeah. That's the service pack update.
(applause)
Oh, we're at the end of the slides.
Now we're taking the people
who didn't give me slides.
(laughing)
Alright.
Who would that be?
I know one.
(laughing)
Are there other people?
(presenter) I added something
at the back--
Uh huh. Two. okay.
Alright. Amir, you go.
Hello. Sorry for a last-minute
presentation.
One reason is that
the dashboard was broken
but we were able to fix it.
So a lot of us use Wikidata,
and you see sometimes
it's a little bit slow
when you want to load a page
especially when the item
is very, very big.
So, in the last month--
several people
at Wikimedia Deutschland
like Rosalie, Jakob and me
started working on it,
and improved the performance of Wikidata.
So now we have something to show
you.
So I will go to www.wiki [inaudible]
Where is the slash--German keyboard...
(laughing)
Ah Shift + 7.
Ah yeah.
And...
you go to--
so this is called a--
speed index.
This is a speed index
of item of Berlin,
and you see in the past 40 months
it went from 90--
which is defined as--
let me read it out loud.
The speed index is the average time
at which visible parts
of the page are displayed.
It's expressed in milliseconds
and depends on the size of--
So it used to be around
1 second for item of Berlin.
Now it's around 800 milliseconds,
and this happens not just
on the item of Berlin,
but all items, and not just all items--
plus all images on Commons.
All of them got better
by 200 milliseconds.
(whistles)
(audience) Woo hoo!
(applause)
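For reference, the speed index as commonly defined in web performance work is the integral over time of the page's visual incompleteness (lower is better); a small sketch with invented sample curves:

```python
# Speed index: integrate (1 - visual completeness) over time, in milliseconds.
# A page that paints most of its content early scores lower (better), even if
# both pages finish at the same moment. Sample curves below are made up.

def speed_index(samples):
    """samples: list of (time_ms, completeness 0..1), sorted by time."""
    si, (t_prev, c_prev) = 0.0, samples[0]
    for t, c in samples[1:]:
        si += (1.0 - c_prev) * (t - t_prev)  # area of the incomplete part
        t_prev, c_prev = t, c
    return si

page_a = [(0, 0.0), (200, 0.8), (1000, 1.0)]  # paints early
page_b = [(0, 0.0), (800, 0.2), (1000, 1.0)]  # paints late
print(speed_index(page_a))  # → 360.0
print(speed_index(page_b))  # → 960.0
```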
Hi everyone.
So you may remember
from a previous presentation the Hub,
which is a tool to browse the web
with URLs
going through Wikidata
as the hub.
So you could do things like
going from [inaudible] identifier
to some other identifier.
I don't remember what P1938
is... (laughing)
but, yeah... Gutenberg. (laughs)
So... (laughing)
So yeah if you know
those identifiers,
you can go from somewhere
to somewhere else,
and, well, do different things.
You can, like, resolve a
Twitter username on Wikidata
and get redirected to the closest
Wikipedia article.
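The URL scheme being described can be sketched as follows; the exact hub.toolforge.org path format and query parameters shown are assumptions about the tool, not a documented API (P1938 is Project Gutenberg author ID, P2002 Twitter username).

```python
# Sketch of a Hub-style URL: an external identifier (property:value) in,
# a redirect to the matching Wikidata item or a sitelink out.
# The URL scheme here is an assumption for illustration.

from urllib.parse import urlencode

def hub_url(prop, value, **params):
    base = "https://hub.toolforge.org/{}:{}".format(prop, value)
    return base + ("?" + urlencode(params) if params else "")

# From a Project Gutenberg author ID to the English Wikipedia article:
print(hub_url("P1938", "35", site="enwiki"))
# → https://hub.toolforge.org/P1938:35?site=enwiki
```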
But not that--
not that many people use
URLs to browse the web
so I thought if people don't
come to the tools
the tools come to them,
and so I did a little script,
and that you will find there at--
da da dum dum
This is on Meta,
and it basically takes the identifiers...
from the Hub
to bring them
to your Wikipedia article.
So, if you add the gadget,
instead of having just those few
things in the sidebar--
because that's not enough
to browse the web--
you will have a collection
of... (laughing)
additional links (laughs)
to all over the web,
and so here you will find, for example--
(grunts)
So this is the page for Berlin
and you will have, for example,
Berlin on an open street map,
Berlin on Quora,
Berlin on Swedish Anbytarforum.
(laughing)
Anything--and so all those
convenience links are
added to every page
that can be resolved
to a Wikidata identifier.
Thank you.
(applause)
So I didn't quite understand
the format of this thing
at the beginning.
(laughing)
So I just smuggled myself inside,
and I will try to improvise a lot.
So...
No.
(audience) It's loading.
It's loading? Okay.
So... oh yeah it's maybe only a bit slow.
So, I mean there is a lot of data
in Wikipedia...
so a lot of text which in fact
contains information
that you could add to Wikidata,
but it's sometimes difficult to find,
or difficult to import
so what we did is
we used the latest machine
learning algorithms
to, given a class, for example
newspapers,
check which are the newspapers--
sorry, the most used properties
for newspapers,
like the owner, the publication date,
the language,
and we are going to the corresponding--
if the statement is missing
in the Wikidata item,
we are going to the Wikipedia page
and we're searching automatically
for this missing statement,
and we are proposing to the user
a new fact.
So the user has just to say
yes or no
to this new fact,
and import it to Wikidata.
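The "most used properties for a class" step mentioned above can be approximated by simply counting which properties existing items of the class already carry; those are then the ones worth hunting for in the Wikipedia text. A toy sketch with made-up data (P127 owned by, P571 inception, P407 language of work):

```python
# Count which properties items of a class already carry, and take the most
# frequent ones as candidates to extract from article text. Toy data only;
# the real tool uses machine learning on top of this kind of signal.

from collections import Counter

def top_properties(items, n=3):
    counts = Counter(p for item in items for p in item["claims"])
    return [p for p, _ in counts.most_common(n)]

newspapers = [
    {"claims": {"P127", "P571", "P407"}},  # owner, inception, language
    {"claims": {"P127", "P407"}},
    {"claims": {"P407"}},
]
print(top_properties(newspapers))  # → ['P407', 'P127', 'P571']
```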
So, unfortunately,
this web page is too big
to load or the internet connection
is too slow.
So I'm sorry for that
but we will make a tweet soon,
and launch this tool,
and I think it will be a very good tool
to very quickly add
a lot of statements to Wikidata
about entities
that you are not even aware of.
Okay. Thank you very much.
I'm sorry for...
(applause)
How do I get back to the page?
Yes. So one thing that I did
earlier this year--
it's called the revamp
of Wiki Loves Monuments in Brazil,
which was the most successful
Wiki Loves Monuments
that's happened in Brazil
[inaudible]--
is create this little box here.
So this is pulling information
from Wikidata.
It's replacing the old style
monument IDs
on Commons which is [inaudible].
So it pulls everything from Wikidata,
multilingual of course.
You need to define the Q id
in this case,
but one thing that changed today,
thank you very much
to the Structured Data on Commons team,
is that they've enabled
Lua access
to Structured Data on Commons.
(audience) Yeah!
(applause)
It's fantastic.
So now what you can do
is you can say
this picture of telescopes--
sorry, I like telescopes--
has the Mark II Telescope
and the Lovell Telescope in the U.K.,
and if you go to the file information,
and edit the page,
you will see that that--
sorry you probably can't see it so easily.
You just need to do Monument ID/SDC
and you get that information
automatically
through the Structured Data on Commons.
I think this is the first template
that can actually do this,
because it's only
become available today.
So thank you very much
to Structured Data on Commons.
(applause)
Do we have anyone else
who would like to present something?
If not, then thank you so much
for all these awesome presents.
Thank you so much for putting
all the time into them.
They were really great.
(applause)