
cdn.media.ccc.de/.../wikidatacon2019-19-eng-Lightning_talks_2_hd.mp4

  • 0:06 - 0:08
    Hi, guys! Can everybody hear me?
  • 0:09 - 0:12
    So, hi! Nice to meet you all.
    I'm Erica Azzellini.
  • 0:12 - 0:15
    I'm one of the Wikimovement
    Brazil's Liaison,
  • 0:15 - 0:18
    and this is my first international
    Wikimedia event,
  • 0:18 - 0:21
    so I'm super excited to be here
    and hopefully
  • 0:21 - 0:24
    I will share something interesting
    with you all in this lightning talk.
  • 0:25 - 0:30
    So this work starts with research
    that I was developing in Brazil,
  • 0:30 - 0:34
    Computational Journalism
    and Structured Narratives with Wikidata.
  • 0:34 - 0:36
    So in journalism,
  • 0:36 - 0:40
    they're using some natural language
    generation software
  • 0:40 - 0:41
    for automating news stories
  • 0:41 - 0:47
    that have a quite similar
    narrative structure.
  • 0:47 - 0:52
    And we developed this concept here
    of structured narratives,
  • 0:52 - 0:55
    thinking about this practice
    in computational journalism,
  • 0:55 - 0:58
    that is the development of verbal text,
    understandable by humans,
  • 0:58 - 1:01
    automated from predetermined
    arrangements that process information
  • 1:01 - 1:05
    from structured databases,
    which, in our case, looks like this:
  • 1:05 - 1:10
    the Wikimedia universe
    and this tool that we developed.
  • 1:10 - 1:14
    So, when I'm talking about verbal text
    understandable by humans,
  • 1:14 - 1:16
    I'm talking about Wikipedia entries.
  • 1:16 - 1:18
    When I'm talking about
    structured databases,
  • 1:18 - 1:20
    of course, I'm talking about
    Wikidata here.
  • 1:20 - 1:23
    And by predetermined arrangements,
    I'm talking about Mbabel,
  • 1:23 - 1:24
    which is this tool.
  • 1:25 - 1:31
    The Mbabel tool was inspired by a template
    by user Pharos, right here in front of me,
  • 1:31 - 1:33
    thank you very much,
  • 1:33 - 1:39
    and it was developed with Ederporto
    who is right here too,
  • 1:39 - 1:41
    the brilliant Ederporto.
  • 1:43 - 1:44
    We developed this tool
  • 1:44 - 1:48
    that automatically generates
    Wikipedia entries
  • 1:48 - 1:51
    based on information from Wikidata.
  • 1:53 - 1:58
    We actually built some thematic templates
  • 1:58 - 2:01
    that are created with the Wikidata module,
  • 2:02 - 2:04
    the WikidataIB module,
  • 2:04 - 2:08
    and these templates are pre-determined,
    generic and editable templates
  • 2:08 - 2:10
    for various article themes.
  • 2:10 - 2:15
    We realized that many Wikipedia entries
    had a quite similar structured narrative
  • 2:15 - 2:19
    so we could create a tool
    that automatically generates that
  • 2:19 - 2:22
    for many Wikidata items.
  • 2:24 - 2:29
    Until now we have templates for museums,
    works of art, books, films,
  • 2:29 - 2:31
    journals, earthquakes, libraries,
    archives,
  • 2:31 - 2:35
    and Brazilian municipal
    and state elections, and growing.
  • 2:35 - 2:39
    So, everybody here is able to contribute
    and create new templates.
  • 2:39 - 2:44
    Each narrative template includes
    an introduction, Wikidata infobox,
  • 2:44 - 2:46
    section suggestions for the users,
  • 2:46 - 2:50
    content tables or lists with Listeria,
    depending on the case,
  • 2:50 - 2:54
    references and categories,
    and of course the sentences,
  • 2:54 - 2:56
    that are created
    with the Wikidata information.
  • 2:56 - 2:59
    I'm gonna show you in a sec
    an example of that.
  • 3:00 - 3:06
    It's an integration between Wikipedia
    and Wikidata,
  • 3:06 - 3:09
    so the more properties properly filled
    on Wikidata,
  • 3:09 - 3:12
    the more text entries you'll get
    on your article stub.
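
The generation step described here is implemented as wikitext templates backed by the WikidataIB Lua module on Wikipedia. The Python below is only a minimal conceptual sketch of that "more properties filled, more text generated" behaviour; the sentence fragments and property choices are made up for illustration, and Q64 (Berlin) is just an example item.

```python
# Conceptual sketch only: the real Mbabel templates are wikitext/Lua on Wikipedia.
import requests

def fetch_entity(qid):
    """Fetch the full JSON record of a Wikidata item."""
    url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
    return requests.get(url).json()["entities"][qid]

# Hypothetical sentence fragments keyed by Wikidata property ID.
SENTENCES = {
    "P571": "It was created or founded in {value}.",   # inception
    "P17": "It is located in {value}.",                # country
    "P1448": "Its official name is {value}.",          # official name
}

def draft_stub(qid):
    """Build a tiny article draft: one sentence per property that is filled in."""
    entity = fetch_entity(qid)
    label = entity["labels"].get("en", {}).get("value", qid)
    lines = [f"'''{label}''' ..."]
    for prop, template in SENTENCES.items():
        claims = entity.get("claims", {}).get(prop)
        if claims:  # only filled-in properties produce text
            value = claims[0]["mainsnak"].get("datavalue", {}).get("value")
            # real templates render each datatype nicely; str() is enough for a sketch
            lines.append(template.format(value=value))
    return " ".join(lines)

print(draft_stub("Q64"))  # Q64 = Berlin
```
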
  • 3:13 - 3:16
    That's very important to highlight here.
  • 3:16 - 3:19
    Structuring this data on Wikidata
    can get more complex
  • 3:19 - 3:22
    as I'm going to show you
    with the election projects that we've made.
  • 3:22 - 3:27
    So I'm going to leave this
    Wikidata Lab XIV link here for you
  • 3:27 - 3:29
    after this lightning talk,
  • 3:29 - 3:32
    which is very brief,
    so you'll be able to check
  • 3:32 - 3:35
    the work that we've been doing
    on structuring Wikidata
  • 3:35 - 3:36
    for this purpose too.
  • 3:37 - 3:40
    We have this challenge to build
    a narrative template
  • 3:40 - 3:44
    that is generic enough
    to cover different Wikidata items
  • 3:44 - 3:46
    and to overcome the gender
  • 3:46 - 3:50
    and number difficulties
    of languages,
  • 3:52 - 3:54
    and still sound natural for the user,
  • 3:54 - 3:59
    because we don't want it to sound
    in a way that doesn't click for the user
  • 3:59 - 4:01
    to edit after that.
  • 4:02 - 4:08
    This is what Mbabel looks like
    on the bottom form.
  • 4:08 - 4:15
    You just have to insert the item number there
    and call the desired template
  • 4:15 - 4:22
    and then you have an article to edit
    and expand, and everything.
  • 4:22 - 4:27
    So, more importantly, why did we do it?
    Not because it's cool to develop
  • 4:27 - 4:31
    things here in Wikidata,
    we know, we all here know about it.
  • 4:31 - 4:36
    But we are experimenting with this integration
    from Wikidata to Wikipedia
  • 4:36 - 4:39
    and we want to focus
    on meaningful individual contributions.
  • 4:39 - 4:43
    So we've been working
    on education programs
  • 4:43 - 4:45
    and we want the students to feel the value
  • 4:45 - 4:47
    of their entries too, but not only--
  • 4:47 - 4:49
    Oh, five minutes only,
    Geez, I'm gonna rush here.
  • 4:49 - 4:51
    (laughing)
  • 4:51 - 4:54
    And we also want to ease tasks
    for users in general,
  • 4:54 - 4:58
    especially on tables
    and this kind of content
  • 4:58 - 5:00
    that is a bit of a rush to do.
  • 5:02 - 5:06
    And we're working on this concept
    of Abstract Wikipedia.
  • 5:06 - 5:09
    Denny Vrandečić wrote a super
    interesting article about it,
  • 5:09 - 5:12
    so I linked it here too.
  • 5:12 - 5:15
    And we also want to now support
    small language communities
  • 5:15 - 5:18
    to help fill the gaps in content there.
  • 5:19 - 5:24
    This is an example of how we've been using
    this Mbabel tool for GLAM
  • 5:24 - 5:26
    and education programs,
  • 5:26 - 5:30
    and I showed you earlier
    the bottom form of the Mbabel tool
  • 5:30 - 5:34
    but we can also make red links
    that aren't exactly empty.
  • 5:34 - 5:36
    So you click on this red link
  • 5:36 - 5:39
    and you automatically have
    this article draft
  • 5:39 - 5:42
    on your user page to edit.
  • 5:43 - 5:49
    And I'm going to briefly talk about it
    because I only have a few minutes left.
  • 5:50 - 5:51
    On educational projects,
  • 5:51 - 5:57
    we've been doing this with elections
    in Brazil for journalism students.
  • 5:57 - 6:02
    We have the experience
    with the [inaudible] students
  • 6:02 - 6:05
    with user Joalpe--
    he's not here right now,
  • 6:05 - 6:08
    but we all know him, I think.
  • 6:08 - 6:12
    And we realized that we have the data
    about Brazilian elections
  • 6:12 - 6:15
    but we don't have media coverage of it.
  • 6:15 - 6:18
    So we were also lacking
    Wikipedia entries on it.
  • 6:19 - 6:23
    How do we insert this meaningful
    information on Wikipedia
  • 6:23 - 6:25
    that people really access?
  • 6:25 - 6:28
    Next year we're going
    to have some elections,
  • 6:28 - 6:31
    people are going to look for
    this kind of information on Wikipedia
  • 6:31 - 6:32
    and they simply won't find it.
  • 6:32 - 6:36
    So this tool looks quite useful
    for this purpose
  • 6:36 - 6:40
    and the students were introduced,
    not only to Wikipedia,
  • 6:40 - 6:43
    but also to Wikidata.
  • 6:43 - 6:47
    Actually, they were introduced
    to Wikipedia with Wikidata,
  • 6:47 - 6:51
    which is a super interesting experience
    and we had a lot of fun,
  • 6:51 - 6:53
    and it was quite challenging
    to organize all that.
  • 6:53 - 6:55
    We can talk about it later too.
  • 6:55 - 6:59
    And they also added the background
    and the analysis sections
  • 6:59 - 7:02
    on these election articles,
  • 7:02 - 7:05
    because we don't want them
    to just simply automate the content there.
  • 7:05 - 7:07
    We can do better.
  • 7:07 - 7:09
    So this is the example
    I'm going to show you.
  • 7:09 - 7:13
    This is from a municipal election
    in Brazil.
  • 7:16 - 7:17
    Two minutes... oh my!
  • 7:19 - 7:23
    This example here was entirely created
    with the Mbabel tool.
  • 7:23 - 7:29
    You have here this introduction text.
    It really sounds natural for the reader.
  • 7:29 - 7:32
    The Wikidata infobox here--
  • 7:32 - 7:35
    it's a masterpiece
    of Ederporto right there.
  • 7:35 - 7:37
    (laughter)
  • 7:37 - 7:42
    And we have here the tables with the
    election results for each position.
  • 7:42 - 7:46
    And we also have these results here
    in textual form too,
  • 7:46 - 7:52
    so it really looks like an article
    that was made, that was handcrafted.
  • 7:54 - 7:58
    The references here were also made
    with the Mbabel tool
  • 7:58 - 8:01
    and we used identifiers
    to build these references here
  • 8:01 - 8:03
    and the categories too.
  • 8:11 - 8:15
    So, to wrap things up here,
    it is still a work in progress,
  • 8:15 - 8:19
    and we have some outreach
    and technical challenges
  • 8:19 - 8:23
    in bringing Mbabel
    to other language communities,
  • 8:23 - 8:25
    especially the smaller ones,
  • 8:25 - 8:27
    and how to support those tools
  • 8:27 - 8:30
    in lower-resource
    language communities too.
  • 8:30 - 8:34
    And finally, is it possible
    to create an Mbabel
  • 8:34 - 8:36
    that overcomes language barriers?
  • 8:36 - 8:40
    I think that's a very interesting
    question for the conference,
  • 8:40 - 8:44
    and hopefully we can figure
    that out together.
  • 8:45 - 8:50
    So, thank you very much,
    and look for the Mbabel poster downstairs
  • 8:50 - 8:54
    if you'd like to have all this information
    wrapped up, okay?
  • 8:54 - 8:55
    Thank you.
  • 8:55 - 8:58
    (audience clapping)
  • 9:00 - 9:03
    (moderator) I'm afraid
    we're a little too short for questions
  • 9:03 - 9:06
    but yes, Erica, as she said,
    has a poster and is very friendly.
  • 9:06 - 9:08
    So I'm sure you can talk to her
    afterwards,
  • 9:08 - 9:09
    and if there's time at the end,
    I'll allow it.
  • 9:09 - 9:12
    But in the meantime,
    I'd like to bring up our next speaker...
  • 9:12 - 9:14
    Thank you.
  • 9:16 - 9:17
    (audience chattering)
  • 9:23 - 9:27
    Next we've got Yolanda Gil,
    talking about Wikidata and Geosciences.
  • 9:28 - 9:29
    Thank you.
  • 9:29 - 9:32
    I come from the University
    of Southern California
  • 9:32 - 9:35
    and I've been working with
    Semantic Technologies for a long time.
  • 9:35 - 9:38
    I want to talk about geosciences
    in particular,
  • 9:38 - 9:41
    where this idea of crowd-sourcing
    from the community is very important.
  • 9:42 - 9:45
    So I'll give you a sense
    that individual scientists,
  • 9:45 - 9:47
    most of them in colleges,
  • 9:47 - 9:50
    collect their own data
    for their particular project.
  • 9:50 - 9:52
    They describe it in their own way.
  • 9:52 - 9:55
    They use their own properties,
    their own metadata characteristics.
  • 9:55 - 9:59
    This is an example
    of some collaborators of mine
  • 9:59 - 10:00
    that collect data from a river.
  • 10:00 - 10:02
    They have their own sensors,
    their own robots,
  • 10:02 - 10:05
    and they study the water quality.
  • 10:05 - 10:11
    I'm going to talk today about an effort
    that we did to crowdsource metadata
  • 10:11 - 10:15
    for a community that works
    in paleoclimate.
  • 10:15 - 10:18
    The article just came out
    so it's in the slides if you're curious,
  • 10:18 - 10:21
    but it's a pretty large community
    that works together
  • 10:21 - 10:24
    to integrate data more efficiently
    through crowdsourcing.
  • 10:24 - 10:29
    So, if you've heard of the
    hockey stick graph for climate,
  • 10:29 - 10:32
    this is the community that does this.
  • 10:32 - 10:35
    This is a study for climate
    in the last 200 years,
  • 10:35 - 10:38
    and it takes them literally many years
    to look at data
  • 10:38 - 10:40
    from different parts of the globe.
  • 10:40 - 10:43
    Each dataset is collected by
    a different investigator.
  • 10:43 - 10:44
    The data is very, very different,
  • 10:44 - 10:47
    so it takes them a long time
    to put together
  • 10:47 - 10:49
    these global studies of climate,
  • 10:49 - 10:52
    and our goal is to make that
    more efficient.
  • 10:52 - 10:54
    So, I've done a lot of work
    over the years.
  • 10:54 - 10:57
    Going back to 2005, we used to call it,
  • 10:57 - 11:00
    "Knowledge Collection from Web Volunteers"
  • 11:00 - 11:02
    or from netizens at that time.
  • 11:02 - 11:04
    We had a system called "Learner."
  • 11:04 - 11:07
    It collected 700,000 common sense,
  • 11:07 - 11:09
    common knowledge statements
    about the world.
  • 11:09 - 11:11
    We did a lot of different techniques.
  • 11:11 - 11:15
    The forms that we did
    to extract knowledge from volunteers
  • 11:15 - 11:19
    really fit the knowledge models,
    the data models that we used
  • 11:19 - 11:21
    and the properties that we wanted to use.
  • 11:21 - 11:25
    I worked with Denny
    in the system called "Shortipedia"
  • 11:25 - 11:27
    when he was a Post Doc at ISI,
  • 11:27 - 11:32
    looking at keeping track
    of the provenance of the assertions,
  • 11:32 - 11:35
    and we started to build
    on the Semantic MediaWiki software.
  • 11:35 - 11:37
    So everything that
    I'm going to describe today
  • 11:37 - 11:39
    builds on that software,
  • 11:39 - 11:41
    but I think that now we have Wikibase,
  • 11:41 - 11:44
    we'll be starting to work more
    on Wikibase.
  • 11:44 - 11:49
    So LinkedEarth is the project
    where we work with paleoclimate scientists
  • 11:49 - 11:51
    to crowdsource the metadata,
  • 11:51 - 11:54
    and as you see in the title, we called it
    "controlled crowdsourcing."
  • 11:54 - 11:57
    So we found a nice niche
  • 11:57 - 12:01
    where we could let them create
    new properties
  • 12:01 - 12:03
    but we had an editorial process for it.
  • 12:03 - 12:04
    So I'll describe to you how it works.
  • 12:04 - 12:10
    For them, if you're looking at a sample
    from lake sediments from 200 years ago,
  • 12:10 - 12:13
    you use different properties
    to describe it
  • 12:13 - 12:16
    than if you have coral sediments
    that you're looking at
  • 12:16 - 12:19
    or coral samples that you're looking at
    that you extract from the ocean.
  • 12:19 - 12:24
    Palmyra is a coral atoll in the Pacific.
  • 12:24 - 12:28
    So if you have coral, you care
    about the species and the genus,
  • 12:28 - 12:32
    but if you're just looking at lake sand,
    you don't have that.
  • 12:32 - 12:35
    So each type of sample
    has very different properties.
  • 12:35 - 12:39
    In LinkedEarth,
    they're able to see in a map
  • 12:39 - 12:40
    where the datasets are.
  • 12:40 - 12:46
    They actually annotate their own datasets
    or the datasets of other researchers
  • 12:46 - 12:47
    when they're using it.
  • 12:47 - 12:50
    So they have a reason
    why they want certain properties
  • 12:50 - 12:52
    to describe those datasets.
  • 12:52 - 12:57
    Whenever there are disagreements,
    or whenever there are agreements,
  • 12:57 - 12:59
    there are community discussions
    about them,
  • 12:59 - 13:03
    and there are also polls to decide
    what properties to settle on.
  • 13:03 - 13:06
    So it's a nice ecosystem.
    I'll give you examples.
  • 13:06 - 13:11
    You look at a particular dataset,
    in this case it's a lake in Africa.
  • 13:11 - 13:14
    So you have the category of the page;
    it can be a dataset,
  • 13:14 - 13:15
    it can be other things.
  • 13:15 - 13:21
    You can download the dataset itself
    and you have kind of canonical properties
  • 13:21 - 13:24
    that they have all agreed to have
    for datasets,
  • 13:24 - 13:26
    and then under Extra Information,
  • 13:26 - 13:29
    those are properties
    that the person describing this dataset,
  • 13:29 - 13:31
    added of their own accord.
  • 13:31 - 13:33
    So these can be new properties.
  • 13:33 - 13:37
    We call them "crowd properties,"
    rather than "core properties."
  • 13:37 - 13:41
    And then when you're describing
    your dataset,
  • 13:41 - 13:44
    in this case
    it's an ice core that you got
  • 13:44 - 13:46
    from a glacier dataset,
  • 13:46 - 13:49
    and you're adding a dataset
    you want to talk about measurements,
  • 13:49 - 13:54
    you have an offering
    of all the existing properties
  • 13:54 - 13:55
    that match what you're saying.
  • 13:55 - 13:58
    So we do this search completion
    so that you can adopt that.
  • 13:58 - 14:00
    That promotes normalization.
  • 14:00 - 14:04
    The core of the properties
    has been agreed by the community
  • 14:04 - 14:06
    so we're really extending that core.
  • 14:06 - 14:09
    And that core is very important
    because it gives structure
  • 14:09 - 14:11
    to all the extensions.
  • 14:11 - 14:14
    We engage the community
    through many different ways.
  • 14:14 - 14:17
    We had one face-to-face meeting
    at the beginning
  • 14:17 - 14:22
    and after about a year and a half,
    we do have a new standard,
  • 14:22 - 14:25
    and a new way for them
    to continue to evolve that standard.
  • 14:25 - 14:31
    They have editors, very much
    in the Wikipedia style
  • 14:31 - 14:32
    of editorial boards.
  • 14:32 - 14:34
    They have working groups
    for different types of data.
  • 14:34 - 14:36
    They do polls with the community,
  • 14:36 - 14:41
    and they have pretty nice engagement
    of the community at large,
  • 14:41 - 14:44
    even if they've never visited our Wiki.
  • 14:44 - 14:46
    The metadata evolves
  • 14:46 - 14:49
    so what we do is that people annotate
    their datasets,
  • 14:49 - 14:52
    then the schema evolves,
    the properties evolve
  • 14:52 - 14:55
    and we have an entire infrastructure
    and mechanisms
  • 14:55 - 15:00
    to re-annotate the datasets
    with the new structure of the ontology
  • 15:00 - 15:02
    and the new properties.
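
The actual re-annotation infrastructure is described in the paper; as a rough illustration of the idea only (all property names below are hypothetical), re-annotating can be thought of as rewriting each dataset's metadata keys whenever a crowd property is renamed or promoted into the core ontology.

```python
# Minimal sketch of schema-evolution re-annotation; names are hypothetical.
PROPERTY_MIGRATIONS = {
    # old (crowd) property name -> new core property name
    "sensorGenus": "coralGenus",
    "measuredVariable": "proxyObservationType",
}

def reannotate(dataset_annotation: dict) -> dict:
    """Return a copy of one dataset's metadata using the current core names."""
    updated = {}
    for prop, value in dataset_annotation.items():
        updated[PROPERTY_MIGRATIONS.get(prop, prop)] = value
    return updated

example = {"sensorGenus": "Porites", "archiveType": "coral"}
print(reannotate(example))
# {'coralGenus': 'Porites', 'archiveType': 'coral'}
```
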
  • 15:02 - 15:05
    This is described in the paper.
    I won't go into the details.
  • 15:05 - 15:08
    But I think that
    having that kind of capability
  • 15:08 - 15:10
    in Wikibase would be really interesting.
  • 15:10 - 15:14
    We basically extended
    Semantic MediaWiki and MediaWiki
  • 15:14 - 15:16
    to create our own infrastructure.
  • 15:16 - 15:19
    I think a lot of this is now something
    that we find in Wikibase,
  • 15:19 - 15:21
    but this is older than that.
  • 15:21 - 15:25
    And in general, we have many projects
    where we look at crowdsourcing
  • 15:25 - 15:30
    not just descriptions of datasets
    but also descriptions of hydrology models,
  • 15:30 - 15:34
    descriptions of multi-step
    data analytic workflows
  • 15:34 - 15:36
    and many other things in the sciences.
  • 15:36 - 15:43
    So we are also interested in including
    in Wikidata additional things
  • 15:43 - 15:46
    that are not just datasets or entities
  • 15:46 - 15:49
    but also other things
    that have to do with science.
  • 15:49 - 15:54
    I think Geosciences are more complex
    in this sense than Biology, for example.
  • 15:55 - 15:56
    That's it.
  • 15:57 - 15:58
    Thank you.
    (audience clapping)
  • 16:02 - 16:04
    - Do I have time for questions?
    - Yes.
  • 16:04 - 16:07
    (moderator) We have time
    for just a couple of short questions.
  • 16:08 - 16:11
    When answering,
    can you go back to the microphone?
  • 16:13 - 16:15
    - Yes.
    - Hopefully, yeah.
  • 16:21 - 16:25
    (audience 1) Does the structure allow
    tabular datasets to be described
  • 16:25 - 16:27
    and can you talk a bit about that?
  • 16:27 - 16:33
    Yes. So the properties of the datasets
    talk more about who collected them,
  • 16:33 - 16:37
    what kind of data was collected,
    what kind of sample it was,
  • 16:37 - 16:40
    and then there's a separate standard
    which is called "lipid"
  • 16:40 - 16:43
    that's complementary and mapped
    to the properties
  • 16:43 - 16:47
    that describes the format
    of the actual files
  • 16:47 - 16:49
    and the actual structure of the data.
  • 16:49 - 16:54
    So, you're right that there's both,
    "how do I find data about x"
  • 16:54 - 16:56
    but also, "Now, how do I use it?
  • 16:56 - 17:00
    How do I know where
    the temperature that I'm looking for
  • 17:00 - 17:03
    is actually in the file?"
  • 17:04 - 17:05
    (moderator) This will be the last.
  • 17:07 - 17:09
    (audience 2) I'll have
    to make it relevant.
  • 17:10 - 17:16
    So, you have shown this process
    of how users can suggest
  • 17:16 - 17:19
    or like actually already put in
    properties,
  • 17:19 - 17:23
    and I didn't fully understand
    how this thing works,
  • 17:23 - 17:24
    or what's the process behind it.
  • 17:24 - 17:28
    Is there some kind of
    folksonomy approach--obviously--
  • 17:28 - 17:33
    but how is it promoted
    into the core vocabulary
  • 17:33 - 17:36
    if something is promoted?
  • 17:36 - 17:38
    Yes, yes. It is.
  • 17:38 - 17:42
    So what we do is we have a core ontology
    and the initial one was actually
  • 17:42 - 17:46
    very thoughtfully put together
    through a lot of discussion
  • 17:46 - 17:48
    by very few people.
  • 17:48 - 17:51
    And then the idea was
    the whole community can extend that
  • 17:51 - 17:53
    or propose changes to that.
  • 17:53 - 17:57
    So, as they are describing datasets,
    they can add new properties
  • 17:57 - 18:00
    and those become "crowd properties."
  • 18:00 - 18:03
    And every now and then,
    the Editorial Committee
  • 18:03 - 18:04
    looks at all of those properties,
  • 18:04 - 18:08
    the working groups look at all of those
    crowd properties,
  • 18:08 - 18:12
    and decide whether to incorporate them
    into the main ontology.
  • 18:12 - 18:16
    So it could be because they're used
    for a lot of dataset descriptions.
  • 18:16 - 18:19
    It could be because
    they are proposed by somebody
  • 18:19 - 18:23
    and they're found to be really interesting
    or key, or uncontroversial.
  • 18:23 - 18:30
    So there's an entire editorial process
    to incorporate those new crowd properties
  • 18:30 - 18:32
    or the folksonomy part of it,
  • 18:32 - 18:36
    but they are really built around the core
    of the ontology.
  • 18:36 - 18:40
    The core ontology then grows
    with more crowd properties
  • 18:40 - 18:44
    and then people propose
    additional crowd properties again.
  • 18:44 - 18:47
    So we've gone through a couple
    of these iterations
  • 18:47 - 18:51
    of rolling out a new core,
    and then extending it,
  • 18:51 - 18:56
    and then rolling out a new core
    and then extending it.
  • 18:56 - 18:58
    - (audience 2) Great. Thank you.
    - Thanks.
  • 18:58 - 19:00
    (moderator) Thank you.
    (audience applauding)
  • 19:02 - 19:04
    (moderator) Thank you, Yolanda.
  • 19:04 - 19:07
    And now we have Adam Shorland
    with "Something About Wikibase,"
  • 19:08 - 19:09
    according to the title.
  • 19:10 - 19:13
    Uh... where's the internet? There it is.
  • 19:13 - 19:19
    So, I'm going to do a live demo,
    which is probably a bad idea
  • 19:19 - 19:21
    but I'm going to try and do it
    as the birthday present later
  • 19:21 - 19:24
    so I figure I might as well try it here.
  • 19:24 - 19:27
    And I also have some notes on my phone
    because I have no slides.
  • 19:29 - 19:32
    So, two years ago,
    I made these Wikibase Docker images
  • 19:32 - 19:34
    that quite a few people have tried out,
  • 19:34 - 19:38
    and even before then,
    I was working on another project,
  • 19:38 - 19:42
    which is kind of ready now,
    and here it is.
  • 19:44 - 19:47
    It's a website that allows you
    to instantly create a Wikibase
  • 19:47 - 19:49
    with a query service and quick statements,
  • 19:49 - 19:52
    without needing to know about
    any of the technical details,
  • 19:52 - 19:54
    without needing to manage
    any of them either.
  • 19:54 - 19:57
    There are still lots of features to go
    and there's still some bugs,
  • 19:57 - 19:59
    but here goes the demo.
  • 19:59 - 20:03
    Let me get my emails up ready...
    because I need them too...
  • 20:03 - 20:07
    Da da da... Stopwatch.
  • 20:07 - 20:08
    Okay.
  • 20:09 - 20:14
    So it's as simple as...
    at the moment it's locked down behind...
  • 20:14 - 20:16
    Oh no! German keyboard!
  • 20:16 - 20:19
    (audience laughing)
  • 20:23 - 20:24
    Foiled... okay.
  • 20:25 - 20:26
    Okay.
  • 20:27 - 20:28
    (audience continues to laugh)
  • 20:30 - 20:32
    Aha! Okay.
  • 20:33 - 20:35
    I'll remember that for later.
    (laughs)
  • 20:37 - 20:38
    Yes.
  • 20:39 - 20:41
    ♪ (humming) ♪
  • 20:41 - 20:45
    Oh my god... now it's American.
  • 20:54 - 20:56
    All you have to do is create an account...
  • 20:59 - 21:00
    da da da...
  • 21:01 - 21:02
    Click this button up here...
  • 21:02 - 21:06
    Come up with a name for Wiki--
    "Demo1"
  • 21:06 - 21:07
    "Demo1"
  • 21:08 - 21:09
    "Demo user"
  • 21:09 - 21:12
    Agree to the terms
    which don't really exist yet.
  • 21:12 - 21:14
    (audience laughing)
  • 21:15 - 21:18
    Click on this thing which isn't a link.
  • 21:22 - 21:24
    And then you have your Wikibase.
  • 21:24 - 21:27
    (audience cheers and claps)
  • 21:29 - 21:30
    Anmelden in German.
  • 21:30 - 21:35
    Demo... oh god! I'm learning lots about
    my demo later.
  • 21:36 - 21:40
    1-6-1-4-S-G...
  • 21:40 - 21:43
    - (audience 3) Y...
    - (Adam) It's random.
  • 21:43 - 21:45
    (audience laughing)
  • 21:46 - 21:48
    Oh, come on....
    (audience laughing)
  • 21:48 - 21:51
    Oh no. It's because this is a capital U...
  • 21:51 - 21:53
    (audience chattering)
  • 21:54 - 21:57
    6-1-4....
  • 21:57 - 22:01
    S-G-ENJ...
  • 22:02 - 22:04
    Is J... oh no. That's... oh yeah. Okay.
  • 22:04 - 22:06
    I'm really... I'm gonna have to look
    at the laptop
  • 22:06 - 22:08
    that I'm doing this on later.
  • 22:08 - 22:09
    Cool...
  • 22:11 - 22:14
    Da da da da da...
  • 22:15 - 22:17
    Maybe I should have some things
    in my clipboard ready.
  • 22:18 - 22:19
    Okay, so now I'm logged in.
  • 22:23 - 22:25
    Oh... keyboards.
  • 22:28 - 22:30
    So you can go and create an item...
  • 22:36 - 22:39
    Yeah, maybe I should make a video.
    It might be easier.
  • 22:39 - 22:42
    So, yeah. You can make items,
    you have quick statements here
  • 22:42 - 22:44
    that have... oh... it is all in German.
  • 22:44 - 22:45
    (audience laughing)
  • 22:45 - 22:46
    (sighs)
  • 22:47 - 22:49
    Oh, log in? Log in?
  • 22:50 - 22:52
    It has... Oh, set up ready.
  • 22:52 - 22:53
    Da da da...
  • 22:56 - 22:58
    It's as easy as...
  • 22:59 - 23:01
    I learned how to use
    Quick Statements yesterday...
  • 23:01 - 23:03
    that's what I know how to do.
  • 23:05 - 23:07
    I can then go back to the Wiki...
  • 23:08 - 23:10
    We can go and see in Recent Changes
  • 23:10 - 23:12
    that there are now two items,
    the one that I made
  • 23:12 - 23:14
    and the one from Quick Statements...
  • 23:14 - 23:15
    and then you go to Quick...
  • 23:15 - 23:17
    ♪ (hums a tune) ♪
  • 23:18 - 23:19
    Stop...no...
  • 23:19 - 23:20
    No...
  • 23:20 - 23:22
    (audience laughing)
  • 23:28 - 23:30
    Oh god...
  • 23:30 - 23:32
    I'm glad I tried this out in advance.
  • 23:33 - 23:36
    There you go.
    And the query service is updated.
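
The exact statements typed during the demo are not visible in the transcript; the following is only a guess at what a minimal QuickStatements batch (version 1 syntax) could look like. The property and item IDs are placeholders, since a fresh Wikibase starts with none.

```python
# A rough sketch of a QuickStatements (v1 syntax) batch like the one in the demo:
# create an item, give it an English label, and add one statement.
# P1/Q1 are placeholders; use IDs that exist on your own Wikibase instance.
batch = "\n".join([
    "CREATE",                      # create a new item
    'LAST\tLen\t"Demo item"',      # set its English label
    "LAST\tP1\tQ1",                # add a statement (placeholder IDs)
])
print(batch)  # paste into the QuickStatements web form and run
```
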
  • 23:36 - 23:38
    (audience clapping)
  • 23:42 - 23:45
    And the idea of this is it'll allow
    people to try out Wikibases.
  • 23:45 - 23:48
    Hopefully, it'll even be able
    to allow people to...
  • 23:49 - 23:51
    have their real Wikibases here.
  • 23:51 - 23:54
    At the moment you can create
    as many as you want
  • 23:54 - 23:56
    and they all just appear
    in this lovely list.
  • 23:56 - 23:59
    As I said, there's lots of bugs
    but it's all super quick.
  • 24:00 - 24:03
    Exactly how this is going to continue
    in the future, we don't know yet
  • 24:03 - 24:06
    because I only finished writing this
    in the last few days.
  • 24:06 - 24:09
    It's currently behind an invitation code
    so that if you want to come try it out,
  • 24:09 - 24:11
    come and talk to me.
  • 24:12 - 24:16
    And if you have any other comments
    or thoughts, let me know.
  • 24:16 - 24:20
    Oh, three minutes...40. That's...
    That's not that bad.
  • 24:20 - 24:21
    Thanks.
  • 24:21 - 24:23
    (audience clapping)
  • 24:28 - 24:30
    Any questions?
  • 24:31 - 24:36
    (audience 5) Are the Quick Statements
    and the Query Service
  • 24:36 - 24:39
    automatically updated?
  • 24:40 - 24:42
    Yes. So the idea is that
    there will be somebody,
  • 24:42 - 24:44
    at the moment, me,
  • 24:44 - 24:45
    maintaining all of the horrible stuff
  • 24:45 - 24:47
    that you don't have to behind the scenes.
  • 24:48 - 24:50
    So kind of think of it like GitHub.com,
  • 24:50 - 24:54
    but you don't have to know anything
    about Git to use it. It's just all there.
  • 24:55 - 24:57
    - [inaudible]
    - Yeah, we'll get that.
  • 24:57 - 25:00
    But any of those
    big hosted solution things.
  • 25:01 - 25:03
    - (audience 6) A feature request.
    - Yes.
  • 25:03 - 25:05
    Is there any-- In Scope
  • 25:05 - 25:10
    do you have plans on making it
    so you can easily import existing...
  • 25:10 - 25:13
    - Wikidata...
    - I have loads of plans.
  • 25:13 - 25:15
    Like I want there to be a button
    where you can just import
  • 25:15 - 25:17
    another whole Wikibase and all of--yeah.
  • 25:17 - 25:21
    There will, in the future list
    that's really long. Yeah.
  • 25:24 - 25:28
    (audience 7) I understand that it's...
    you want to make it user-friendly
  • 25:28 - 25:32
    but if I want to access
    to the machine itself, can I do that?
  • 25:32 - 25:35
    Nope.
    (audience laughing)
  • 25:37 - 25:41
    So again, like, in the longer term future,
    there are possib...
  • 25:41 - 25:44
    Everything's possible,
    but at the moment, no.
  • 25:45 - 25:50
    (audience 8) Two questions.
    Is there a plan to have export tools
  • 25:50 - 25:53
    so that you can export it
    to your own Wikibase maybe at some point?
  • 25:53 - 25:54
    - Yes.
    - Great.
  • 25:54 - 25:56
    And is this a business?
  • 25:56 - 25:58
    I have no idea.
    (audience laughing)
  • 26:00 - 26:02
    Not currently.
  • 26:06 - 26:08
    (audience 9) What if I stop
    using it tomorrow,
  • 26:08 - 26:11
    how long will the data be there?
  • 26:11 - 26:15
    So my plan was at the end of WikidataCon
    I was going to delete all of the data
  • 26:15 - 26:18
    and there's a Wikibase Workshop
    on a Sunday,
  • 26:18 - 26:22
    and we will maybe be using this
    for the Wikibase workshop
  • 26:22 - 26:24
    so that everyone can have
    their own Wikibase.
  • 26:24 - 26:27
    And then, from that point,
    I probably won't be deleting the data
  • 26:27 - 26:29
    so it will all just stay there.
  • 26:32 - 26:33
    (moderator) Question.
  • 26:35 - 26:36
    (audience 10) It's two minutes...
  • 26:36 - 26:40
    Alright, fine. I'll allow two more
    questions if you talk quickly.
  • 26:40 - 26:42
    (audience laughing)
  • 26:47 - 26:50
    - Alright, good people.
    - Thank you, Adam.
  • 26:50 - 26:52
    Thank you for letting me test
    my demo... I mean...
  • 26:52 - 26:55
    I'm going to do it different.
    (audience clapping)
  • 27:00 - 27:01
    (moderator) Thank you.
  • 27:01 - 27:04
    Now we have Dennis Diefenbach
    presenting QAnswer.
  • 27:04 - 27:08
    Hello, I'm Dennis Diefenbach,
    I would like to present QAnswer
  • 27:08 - 27:11
    which is a question-answering system
    on top of Wikidata.
  • 27:11 - 27:16
    So, what we need are some questions
    and this is the interface of QAnswer.
  • 27:16 - 27:23
    For example, where is WikidataCon?
  • 27:24 - 27:26
    Alright, I think it's written like this.
  • 27:27 - 27:32
    2019... And we get this response
    which is Berlin.
  • 27:32 - 27:38
    So, other questions. For example,
    "When did Wikidata start?"
  • 27:38 - 27:42
    It started on 30 October 2012,
    so its birthday is approaching.
  • 27:44 - 27:48
    It is 6 years old,
    so it will be its 7th birthday.
  • 27:49 - 27:52
    Who is developing Wikidata?
  • 27:52 - 27:54
    The Wikimedia Foundation
    and Wikimedia Deutschland,
  • 27:54 - 27:56
    so thank you very much to them.
  • 27:57 - 28:03
    Something like museums in Berlin...
    I don't know why this is not so...
  • 28:05 - 28:08
    Only one museum... no, yeah, a few more.
  • 28:09 - 28:11
    So, when you ask something like this,
  • 28:11 - 28:14
    we allow the user
    to explore the information
  • 28:14 - 28:16
    with different aggregations.
  • 28:16 - 28:19
    For example,
    if there are many geo coordinates
  • 28:19 - 28:21
    attached to the entities,
    we will display a map.
  • 28:21 - 28:26
    If there are many images attached to them,
    we will display the images,
  • 28:26 - 28:29
    and otherwise there is a list
    where you can explore
  • 28:29 - 28:31
    the different entities.
  • 28:33 - 28:36
    You can ask something like
    "Who is the mayor of Berlin,"
  • 28:37 - 28:40
    "Give me politicians born in Berlin,"
    and things like this.
  • 28:40 - 28:44
    So you can both ask keyword questions
    and full natural language questions.
  • 28:45 - 28:49
    The whole data is coming from Wikidata
  • 28:49 - 28:55
    so all entities which are in Wikidata
    are queryable by this service.
  • 28:56 - 28:59
    And the data is really all from Wikidata
  • 28:59 - 29:01
    in the sense,
    there are some Wikipedia snippets,
  • 29:01 - 29:05
    there are images from Wikimedia Commons,
  • 29:05 - 29:08
    but the rest is all Wikidata data.
  • 29:09 - 29:12
    We can do this in several languages.
    This is now in Chinese.
  • 29:12 - 29:15
    I don't know what is written there
    so do not ask me.
  • 29:15 - 29:20
    We are currently supporting these languages
    with more or less good quality
  • 29:20 - 29:22
    because... yeah.
  • 29:23 - 29:28
    So, how can this be useful
    for the Wikidata community?
  • 29:28 - 29:30
    I think there are different reasons.
  • 29:30 - 29:34
    First of all, this thing helps you
    to generate SPARQL queries
  • 29:34 - 29:37
    and I know there are even some workshops
    about how to use SPARQL.
  • 29:37 - 29:39
    It's not a language that everyone speaks.
  • 29:39 - 29:45
    So, if you ask something like
    "a philosopher born before 1908,"
  • 29:45 - 29:49
    figuring out how to construct
    a SPARQL query like this could be tricky.
  • 29:50 - 29:54
    In fact when you ask a question,
    we generate many SPARQL queries
  • 29:54 - 29:57
    and the first one is always the one,
    the SPARQL query where we think
  • 29:57 - 29:59
    this is the good one.
  • 29:59 - 30:03
    So, if you ask your question
    and then you go to the SPARQL list,
  • 30:03 - 30:06
    then there is this button
    for the Wikidata query service
  • 30:06 - 30:12
    and you have the SPARQL query right there
    and you will get the same result
  • 30:12 - 30:15
    as you would get in the interface.
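
For illustration, here is one way such a query could look when run directly against the Wikidata Query Service from Python. This is a hand-written example of a "philosopher born before 1908" query, not QAnswer's actual output.

```python
# Hand-written example query against the Wikidata Query Service endpoint.
import requests

QUERY = """
SELECT ?person ?personLabel WHERE {
  ?person wdt:P106 wd:Q4964182 .   # occupation: philosopher
  ?person wdt:P569 ?born .         # date of birth
  FILTER(YEAR(?born) < 1908)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 20
"""

r = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "lightning-talk-example/0.1"},
)
for row in r.json()["results"]["bindings"]:
    print(row["personLabel"]["value"])
```
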
  • 30:17 - 30:19
    Another thing it could be useful for
  • 30:19 - 30:23
    is finding missing
    contextual information.
  • 30:23 - 30:27
    For example, if you ask for actors
    in "The Lord of the Rings,"
  • 30:27 - 30:31
    most of these entities
    will have associated an image
  • 30:31 - 30:32
    but not all of them.
  • 30:32 - 30:38
    So here there is some missing metadata
    that could be added.
  • 30:38 - 30:40
    You could go to this entity, add an image,
  • 30:40 - 30:45
    having seen first
    that there is an image missing, and so on.
  • 30:46 - 30:52
    Another thing is that you could find
    schema issues.
  • 30:52 - 30:55
    For example, if you ask
    "books by Andrea Camilleri,"
  • 30:55 - 30:58
    which is a famous Italian writer,
  • 30:58 - 31:00
    you would currently get
    these three books.
  • 31:00 - 31:03
    But he wrote many more.
    He wrote more than 50.
  • 31:03 - 31:06
    And so the question is,
    are they not in Wikidata
  • 31:06 - 31:10
    or is maybe the knowledge
    not modeled correctly as it currently is.
  • 31:10 - 31:13
    And in this case, I know
    there is another book from him,
  • 31:13 - 31:15
    which is "Un mese con Montalbano."
  • 31:15 - 31:18
    It has only an Italian label
    so you can only search it in Italian.
  • 31:18 - 31:22
    And if you go to this entity,
    you will see that he has written it.
  • 31:22 - 31:28
    It's a short story by Andrea Camilleri
    and it's an instance of literary work,
  • 31:28 - 31:29
    but it's not an instance of book,
  • 31:29 - 31:31
    so that's the reason why
    it doesn't appear.
  • 31:31 - 31:36
    This is a way to track
    where things are missing
  • 31:36 - 31:37
    in the Wikidata model
  • 31:37 - 31:40
    or not modeled as you would expect.
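
As a sketch of that modelling gap (the author's item ID is left as a placeholder; these are illustrative queries, not QAnswer's own), a query that matches only "instance of: book" (Q571) misses a short story typed as "literary work" (Q7725634), while a query over literary work and its subclasses would find it.

```python
# Illustration of the schema issue described above; not QAnswer's own queries.
AUTHOR = "Q_____"  # placeholder: substitute the author's actual Wikidata item ID

narrow_query = f"""
SELECT ?work WHERE {{
  ?work wdt:P50 wd:{AUTHOR} ;       # author
        wdt:P31 wd:Q571 .           # instance of: book (exact match only)
}}
"""

broader_query = f"""
SELECT ?work WHERE {{
  ?work wdt:P50 wd:{AUTHOR} ;
        wdt:P31/wdt:P279* wd:Q7725634 .   # literary work or any subclass of it
}}
"""
print(narrow_query, broader_query)
```
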
  • 31:41 - 31:43
    Another reason is just to have fun.
  • 31:44 - 31:48
    I imagine that many of you added
    many Wikidata entities
  • 31:48 - 31:51
    so just search for the ones
    that you care most
  • 31:51 - 31:53
    or you have edited yourself.
  • 31:53 - 31:57
    So in this case, who developed
    QAnswer, and that's it.
  • 31:57 - 32:00
    For any other questions,
    go to www.QAnswer.eu/qa
  • 32:00 - 32:04
    and hopefully we'll find
    an answer for you.
  • 32:04 - 32:06
    (audience clapping)
  • 32:14 - 32:17
    - Sorry.
    - I'm just the dumbest person here.
  • 32:18 - 32:23
    (audience 11) So I want to know
    how is this kind of agnostic
  • 32:23 - 32:25
    to Wikibase instance,
  • 32:25 - 32:29
    or has it been tied to the exact
    like property numbers
  • 32:29 - 32:31
    and things in Wikidata?
  • 32:31 - 32:33
    Has it learned in some way
    or how was it set up?
  • 32:33 - 32:36
    There is training data
    and we rely on training data
  • 32:36 - 32:41
    and this is also, in most cases,
    why you will not get good results.
  • 32:41 - 32:45
    But we're training the system
    with simple yes and no answers.
  • 32:45 - 32:49
    When you ask a question,
    and we ask always for feedback, yes or no,
  • 32:49 - 32:52
    and this feedback is used by
    the machine learning algorithm.
  • 32:52 - 32:54
    This is where machine learning
    comes into play.
  • 32:54 - 32:59
    But basically, we put up separate
    Wikibase instances
  • 32:59 - 33:00
    and we can plug this in.
  • 33:00 - 33:04
    In fact, the system is agnostic
    in the sense that it only wants RDF.
  • 33:04 - 33:07
    And RDF, you have in each Wikibase,
  • 33:07 - 33:08
    there are some few configurations
  • 33:08 - 33:10
    but you can have this on top
    of any Wikibase.
  • 33:12 - 33:13
    (audience 11) Awesome.
  • 33:24 - 33:27
    (audience 12) You mentioned that
    it's being trained by yes/no answers.
  • 33:27 - 33:33
    So I guess this is assuming that
    the Wikidata instance is free of errors
  • 33:33 - 33:34
    or is it also...?
  • 33:34 - 33:37
    You assume that the Wikidata instances...
  • 33:37 - 33:41
    (audience 12) I guess I'm asking, like,
    are you distinguishing
  • 33:41 - 33:46
    between source level errors
    or misunderstanding the question
  • 33:46 - 33:51
    versus a bad mapping, etc.?
  • 33:52 - 33:55
    Generally, we assume that the data
    in Wikidata is true.
  • 33:55 - 33:59
    So if you click "no"
    and the data in Wikidata would be false,
  • 33:59 - 34:03
    then yeah... we would not catch
    this difference.
  • 34:03 - 34:05
    But honestly, Wikidata quality
    is very good,
  • 34:05 - 34:08
    so I rarely have had this problem.
  • 34:17 - 34:22
    (audience 12) Is this data available
    as a dataset by any chance, sir?
  • 34:22 - 34:27
    - What is... direct service?
    - The... dataset of...
  • 34:27 - 34:31
    "is this answer correct
    versus the query versus the answer?"
  • 34:31 - 34:33
    Is that something you're publishing
    as part of this?
  • 34:33 - 34:38
    - The training data that you've...
    - We published the training data.
  • 34:38 - 34:43
    We published some old training data
    but no, just a--
  • 34:45 - 34:47
    There is a question there.
    I don't know if we have still time.
  • 34:51 - 34:55
    (audience 13) Maybe I just missed this
    but is it running on a live,
  • 34:55 - 34:57
    like the Live Query Service,
  • 34:57 - 34:59
    or is it running on
    some static dump you loaded
  • 34:59 - 35:02
    or where is the data source
    for Wikidata?
  • 35:02 - 35:07
    Yes. The problem is
    to apply this technology,
  • 35:07 - 35:08
    you need a local dump.
  • 35:08 - 35:11
    Because we do not rely only
    on the SPARQL end point,
  • 35:11 - 35:13
    we rely on special indexes.
  • 35:13 - 35:16
    So, we are currently loading
    the Wikidata dump.
  • 35:16 - 35:19
    We are updating this every two weeks.
  • 35:19 - 35:21
    We would like to do it more often,
  • 35:21 - 35:24
    in fact we would like to get the diffs
    for each day, for example,
  • 35:24 - 35:25
    to put them in our index.
  • 35:25 - 35:29
    But unfortunately, right now,
    the Wikidata dumps are released
  • 35:29 - 35:32
    only once every week.
  • 35:32 - 35:35
    So, we cannot be faster than that
    and we also need some time
  • 35:35 - 35:39
    to re-index the data,
    so it takes one or two days.
  • 35:39 - 35:42
    So we are always behind. Yeah.
  • 35:48 - 35:50
    (moderator) Any more?
  • 35:50 - 35:53
    - Okay, thank you very much.
    - Thank you all very much.
  • 35:54 - 35:55
    (audience clapping)
  • 35:57 - 36:00
    (moderator) And now last, we have
    Eugene Alvin Villar,
  • 36:00 - 36:02
    talking about Panandâ.
  • 36:11 - 36:13
    Good afternoon,
    my name is Eugene Alvin Villar
  • 36:13 - 36:15
    and I'm from the Philippines,
    and I'll be talking about Panandâ:
  • 36:15 - 36:18
    a mobile app powered by Wikidata.
  • 36:19 - 36:22
    This is a follow-up to my lightning talk
    that I presented two years ago
  • 36:22 - 36:25
    at WikidataCon 2017
    together with Carlo Moskito.
  • 36:25 - 36:27
    You can download the slides
  • 36:27 - 36:29
    and there's a link
    to that presentation there.
  • 36:29 - 36:31
    I'll give you a bit of a background.
  • 36:31 - 36:33
    Wiki Society of the Philippines,
    formerly, Wikimedia Philippines,
  • 36:33 - 36:37
    had a series of projects related
    to Philippine heritage and history.
  • 36:37 - 36:42
    So we have the usual photo contests,
    Wikipedia Takes Manila,
  • 36:42 - 36:43
    Wiki Loves Monuments,
  • 36:43 - 36:47
    and then our major project
    was the Cultural Heritage Mapping Project
  • 36:47 - 36:49
    back in 2014-2015.
  • 36:50 - 36:53
    In that project, we trained volunteers
    to edit articles
  • 36:53 - 36:54
    related to cultural heritage.
  • 36:55 - 36:59
    This was the biggest
    and most successful project that we had.
  • 36:59 - 37:03
    794 articles were created or improved,
    including 37 "Did You Knows"
  • 37:03 - 37:05
    and 4 "Good Articles,"
  • 37:05 - 37:09
    and more than 5,000 images were uploaded
    to Commons.
  • 37:09 - 37:11
    As a result of that, we then launched
  • 37:11 - 37:14
    the Encyclopedia
    of Philippine Heritage program
  • 37:14 - 37:18
    in order to expand the scope
    and also include Wikidata in the scope.
  • 37:18 - 37:22
    Here's the Core Team: myself,
    Carlo and Roel.
  • 37:22 - 37:27
    Our first pilot project was to document
    the country's historical markers
  • 37:27 - 37:29
    in Wikidata and Commons,
  • 37:29 - 37:34
    starting with those created by
    our national historical agency, the NHCP.
  • 37:34 - 37:39
    For example, they installed a marker
    for our national hero, here in Berlin,
  • 37:39 - 37:41
    so there's now a Wikidata page
    for that marker
  • 37:41 - 37:45
    and a collection of photos of that marker
    in Commons.
  • 37:46 - 37:50
    Unfortunately, the government agency
    does not keep a good database
  • 37:50 - 37:53
    of their markers up-to-date or complete,
  • 37:53 - 37:58
    so we have to painstakingly input these
    to Wikidata manually.
  • 37:58 - 38:03
    After careful research and confirmation,
    here's a graph of the number of markers
  • 38:03 - 38:07
    that we've added to Wikidata over time,
    over the past three years.
  • 38:07 - 38:11
    And we've developed
    this Historical Markers Map web app
  • 38:11 - 38:15
    that lets users view
    these markers on a map,
  • 38:15 - 38:21
    so we can browse it as a list,
    view a good visualization of the markers
  • 38:21 - 38:23
    with information and inscriptions.
  • 38:23 - 38:29
    All of this is powered by Live Query
    from Wikidata Query Service.
  • 38:30 - 38:32
    There's the link
    if you want to play around with it.
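
As a rough sketch of what such a live query might look like, the snippet below builds a SPARQL query for the map view. The class ID for these markers is left as a placeholder; coordinate location (P625) and image (P18) are standard Wikidata properties, and this is not the app's actual query.

```python
# Hand-written sketch, not the app's real query. MARKER_CLASS is a placeholder
# for whatever item ID the project uses for NHCP historical markers; fill it in
# before sending the query to https://query.wikidata.org/sparql
MARKER_CLASS = "Q_____"

QUERY = f"""
SELECT ?marker ?markerLabel ?coord ?image WHERE {{
  ?marker wdt:P31 wd:{MARKER_CLASS} ;     # instance of the marker class
          wdt:P625 ?coord .               # coordinate location, for the map view
  OPTIONAL {{ ?marker wdt:P18 ?image. }}  # a Commons photo, if one exists
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
"""
print(QUERY)
```
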
  • 38:33 - 38:37
    And so we developed
    a mobile app for this one.
  • 38:37 - 38:42
    To better publicize our project,
    I developed Panandâ,
  • 38:42 - 38:45
    which is Tagalog for "marker",
    as an Android app
  • 38:45 - 38:48
    that was published back in 2018,
  • 38:48 - 38:54
    and I'll publish the iOS version
    sometime in the future, hopefully.
  • 38:55 - 38:58
    I'd like to demo the app
    but we have no time,
  • 38:58 - 39:01
    so here are some
    of the features of the app.
  • 39:01 - 39:05
    There's a Map and a List view,
    with text search,
  • 39:05 - 39:07
    so you can drill down as needed.
  • 39:07 - 39:10
    You can filter by region or by distance,
  • 39:10 - 39:12
    and whether you have marked
    these markers,
  • 39:12 - 39:15
    as either you have visited them
    or you'd like to bookmark them
  • 39:15 - 39:17
    for future visits.
  • 39:17 - 39:19
    Then you can use your GPS
    on your mobile phone
  • 39:19 - 39:22
    for distance filtering.
  • 39:22 - 39:27
    For example, if I want markers
    that are near me, you can do that.
  • 39:27 - 39:31
    And when you click on the Details page,
    you can see the same thing,
  • 39:31 - 39:36
    photos from Commons,
    inscription about the marker,
  • 39:36 - 39:40
    how to find the marker,
    its location and address, etc.
  • 39:42 - 39:46
    And one thing that's unique for this app
    is you can, again, visit
  • 39:46 - 39:50
    or put a bookmark of these,
    so on the map or on the list,
  • 39:50 - 39:52
    or on the Details page,
  • 39:52 - 39:55
    you can just tap on those buttons
    and say that you've visited them,
  • 39:55 - 39:59
    or you'd like to bookmark them
    for future visits.
  • 39:59 - 40:04
    And my app has been covered by the press
    and given recognition,
  • 40:04 - 40:07
    so plenty of local press articles.
  • 40:07 - 40:11
    Recently, it was selected
    as one of the Top 5 finalists
  • 40:11 - 40:15
    for the Android Masters competition
    in the App for Social Good category.
  • 40:15 - 40:17
    The final event will be next month.
  • 40:17 - 40:19
    Hopefully, we'll win.
  • 40:20 - 40:22
    Okay, so some behind the scenes.
  • 40:22 - 40:25
    How did I develop this app?
  • 40:25 - 40:29
    Panandâ is actually a hybrid app,
    it's not native.
  • 40:29 - 40:31
    Basically it's just a web app
    packaged as a mobile app
  • 40:31 - 40:33
    using Apache Cordova.
  • 40:33 - 40:34
    That reduces development time
  • 40:34 - 40:36
    because I don't have to learn
    a different language.
  • 40:36 - 40:38
    I know JavaScript, HTML.
  • 40:38 - 40:42
    It's cross-platform, allows code reuse
    from the Historical Markers Map.
  • 40:42 - 40:46
    And the app is also free and open source,
    under the MIT license.
  • 40:46 - 40:49
    So there's the GitHub repository
    over there.
  • 40:50 - 40:54
    The challenge is
    the app's data is not live.
  • 40:55 - 40:57
    Because if you query the data live,
  • 40:57 - 41:01
    it means you're pulling around half
    a megabyte of compressed JSON every time
  • 41:01 - 41:04
    which is not friendly
    for those on mobile data,
  • 41:04 - 41:07
    incurs too much delay when starting
    the app,
  • 41:07 - 41:13
    and if there are any errors in Wikidata,
    that may result in poor user experience.
  • 41:14 - 41:18
    So instead, what I did was
    the app is updated every few months
  • 41:18 - 41:20
    with fresh data, compiled using
    a Perl script
  • 41:20 - 41:23
    that queries Wikidata Query Service,
  • 41:23 - 41:26
    and this script also does
    some data validation
  • 41:26 - 41:31
    to highlight consistency or schema errors,
    so that allows fixes before updates
  • 41:31 - 41:35
    in order to provide a good experience
    for the mobile user.
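
The project's actual pipeline is a Perl script; the Python below only illustrates the same two ideas, with hypothetical field names: take rows pulled from the Wikidata Query Service and flag records that would give mobile users a bad experience before bundling the rest as static app data.

```python
# Illustrative sketch of the validation step (the real script is Perl).
import json

REQUIRED_FIELDS = ["coord", "inscription"]  # hypothetical required fields

def validate(rows):
    """Return (clean_rows, problems) for a list of SPARQL result rows."""
    clean, problems = [], []
    for row in rows:
        missing = [f for f in REQUIRED_FIELDS if f not in row]
        if missing:
            problems.append((row.get("marker", "?"), missing))
        else:
            clean.append(row)
    return clean, problems

# e.g. rows fetched earlier from https://query.wikidata.org/sparql
rows = [
    {"marker": "Q-example-1", "coord": "Point(121.0 14.6)", "inscription": "..."},
    {"marker": "Q-example-2", "coord": "Point(13.4 52.5)"},  # inscription missing
]
clean, problems = validate(rows)
print(json.dumps(clean, indent=2))   # bundled into the app as static data
print("needs fixing before the next release:", problems)
```
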
  • 41:35 - 41:39
    And here's the... if you're tech-oriented,
    here are, more or less,
  • 41:39 - 41:42
    the technologies that I'm using.
  • 41:42 - 41:44
    So a bunch of JavaScript libraries.
  • 41:44 - 41:46
    Here's the Perl script
    that queries Wikidata,
  • 41:46 - 41:49
    some Cordova plug-ins,
  • 41:49 - 41:53
    and building it using Cordova
    and then publishing this app.
  • 41:54 - 41:56
    And that's it.
  • 41:56 - 41:58
    (audience clapping)
  • 42:02 - 42:04
    (moderator) I hope you win.
    Alright, questions.
  • 42:16 - 42:18
    (audience 14) Sorry if I missed this.
  • 42:18 - 42:21
    Are you opening your code
    so the people can adapt your app
  • 42:21 - 42:25
    and do it for other cities?
  • 42:25 - 42:29
    Yes, as I've mentioned,
    the app is free and open source,
  • 42:29 - 42:31
    - (audience 14) But where is it?
    - There's the GitHub repository.
  • 42:31 - 42:34
    You can download the slides,
    and there's a link
  • 42:34 - 42:37
    in one of the previous slides
    to the repository.
  • 42:37 - 42:39
    (audience 14) Okay. Can you put it?
  • 42:42 - 42:44
    Yeah, at the bottom.
  • 42:47 - 42:49
    (audience 15) Hi. Sorry, maybe
    I also missed this,
  • 42:49 - 42:52
    but how do you check for a schema errors?
  • 42:53 - 42:56
    Basically, we have a Wikiproject
    on Wikidata,
  • 42:56 - 43:02
    so we try to put there the guidelines
    on how to model these markers correctly.
  • 43:02 - 43:05
    Although it's not updated right now.
  • 43:06 - 43:09
    As far as I know, we're the only country
  • 43:09 - 43:13
    that's currently modeling these
    in Wikidata.
  • 43:14 - 43:20
    There's also an effort
    to add [inaudible]
  • 43:20 - 43:22
    in Wikidata,
  • 43:22 - 43:26
    but I think that's
    a different thing altogether.
  • 43:34 - 43:36
    (audience 16) So I guess this may be part
  • 43:36 - 43:38
    of this Wikiproject you just described,
  • 43:38 - 43:43
    but for the consistency checks,
    have you considered moving those
  • 43:43 - 43:47
    into like complex schema constraints
    that then can be flagged
  • 43:47 - 43:51
    on the Wikidata side for
    what there is to fix on there?
  • 43:53 - 43:56
    I'm actually interested in seeing
    if I can do, for example,
  • 43:56 - 44:00
    shape expressions, so that, yeah,
    we can do those things.
  • 44:04 - 44:07
    (moderator) At this point,
    we have quite a few minutes left.
  • 44:07 - 44:09
    The speakers did very well,
    so if Erica is okay with it,
  • 44:09 - 44:11
    I'm also going to allow
    some time for questions,
  • 44:11 - 44:13
    still about this presentation,
    but also about Mbabel,
  • 44:13 - 44:15
    if anyone wants to jump in
    with something there,
  • 44:15 - 44:17
    either presentation is fair game.
  • 44:23 - 44:26
    Unless like me, you're all so dazzled
    that you just want to go to snacks
  • 44:26 - 44:28
    and think about it.
    (audience giggles)
  • 44:29 - 44:31
    - (moderator) You know...
    - Yeah.
  • 44:32 - 44:34
    (audience 17) I will always have
    questions about everything.
  • 44:34 - 44:38
    So, I came in late for the Mbabel tool.
  • 44:38 - 44:40
    But I was looking through
    and I saw there's a number of templates,
  • 44:40 - 44:43
    and I was wondering
    if there's a place to contribute
  • 44:43 - 44:46
    to adding more templates
    for different types
  • 44:46 - 44:48
    or different languages and the like?
  • 44:50 - 44:54
    (Erica) So for now, we're developing
    those narrative templates
  • 44:54 - 44:56
    on Portuguese Wikipedia.
  • 44:56 - 44:58
    I can show you if you like.
  • 44:58 - 45:02
    We're inserting those templates
    on English Wikipedia too.
  • 45:02 - 45:07
    It's not complicated to do
    but we have to expand for other languages.
  • 45:07 - 45:08
    - French?
    - French.
  • 45:08 - 45:10
    - Yes.
    - French and German already have.
  • 45:10 - 45:11
    (laughing)
  • 45:12 - 45:13
    Yeah.
  • 45:16 - 45:18
    (inaudible chatter)
  • 45:22 - 45:24
    (audience 18) I also have a question
    about Mbabel,
  • 45:24 - 45:28
    which is, is this really just templates?
  • 45:28 - 45:34
    Is this based on the Lua scripting?
    Is that all? Wow. Okay.
  • 45:34 - 45:37
    Yeah, so it's very deployable. Okay. Cool.
  • 45:38 - 45:40
    (moderator) Just to catch that
    for the live stream,
  • 45:40 - 45:43
    the answer was an emphatic nod
    of the head, and a yes.
  • 45:43 - 45:45
    (audience laughing)
  • 45:45 - 45:47
    - (Erica) Super simple.
    - (moderator) Super simple.
  • 45:48 - 45:50
    (audience 19) Yeah.
    I would also like to ask.
  • 45:50 - 45:53
    Sorry I haven't delved
    into Mbabel earlier.
  • 45:53 - 45:57
    I'm wondering, you're working also
    with the links, the red links.
  • 45:57 - 46:00
    Are you adding some code there?
  • 46:04 - 46:08
    - (Erica) For the lists?
    - Wherever the link comes from...
  • 46:08 - 46:12
    (audience 19) The architecture.
    Maybe I will have to look into it.
  • 46:12 - 46:13
    (Erica) I'll show you later.
  • 46:21 - 46:23
    (moderator) Alright. You're all ready
    for snack break, I can tell.
  • 46:23 - 46:24
    So let's wrap it up.
  • 46:24 - 46:26
    But our kind speakers,
    I'm sure will stick around
  • 46:26 - 46:28
    if you have questions for them.
  • 46:28 - 46:31
    Please join me in giving... first of all
    we didn't give a round of applause yet.
  • 46:31 - 46:33
    I can tell you're interested in doing so.
  • 46:33 - 46:35
    (audience clapping)