Unverified Commit c1beff9c authored by Lyudmila Vaseva

[Defence] Reorganise slides

parent 3e41ef4a
@@ -22,19 +22,20 @@ Gandalf
---
## Motivation

why is it relevant? description of the state of the art
what is the question? Q1-Q4

* Wikipedia is a complex socio-technical system
* we are lucky that it is "open", so we can study it, learn how things work,
  and apply the insights to less open systems
* "anyone can edit": increasing popularity in 2006 -> increasing need for
  quality control
* edit filters are one particular mechanism for quality control among several,
  and one previously unstudied
* it seems relevant to understand how they work and what they do, since they
  make it possible to disallow edits (and other actions, but above all edits)
  from the very beginning
---
@@ -53,6 +54,14 @@ number, etc.)?
---
## Approach / Analysis Sources
* Literature
* Documentation
* Data (Edit filter patterns, DB log table)
---
## State of the Scientific Literature
<img src="images/funnel-no-filters-new.png" class="stretch" height="500" alt="Funnel diagram of all vandal-fighting mechanisms (no filters)">
@@ -64,30 +73,104 @@ number, etc.)?
## Q1: What is the role of edit filters among existing algorithmic quality-control mechanisms on Wikipedia (bots, semi-automated tools, ORES, humans)?

* 1st mechanism activated to control quality (at the beginning of the funnel)
* historically faster, being a direct part of the core software: can disallow
  an edit even before it is published
* can target malicious users directly without restricting everyone (<-> page
  protection)
* introduced to take care of obvious but cumbersome-to-remove vandalism
* people were fed up with bot introduction and development processes (poor
  quality, no tests, no code available for review in case of problems), so
  they came up with a new approach
* filters allow more easily for collaboration
---
## Q2: Edit filters are a classical rule-based system. Why are they still active today when more sophisticated ML approaches exist?

* introduced before ML-based vandalism-fighting systems came along (verify!),
  so they were there first historically; they still work well: don't touch a
  running system^^
* a gap was perceived in the existing system which was filled with filters
  * in functionality: disallow cumbersome vandalism from the start
  * in governance: bots are poorly tested, communication and updates are
    difficult
* volunteer system: people do what they like and can (someone had experience
  with this type of tech and implemented it that way)
* rule-based systems are more transparent and accountable
  * and easier to work with (easier to add yet another rule than to tweak
    parameters in an obscure ML-based approach)
* allow for finer levels of control than ML: e.g. disallowing specific users
---
## Q3: Which type of tasks do filters take over?

* overall, most filters are hidden: i.e. implemented with the purpose of
  taking care of cumbersome vandalism by specific malicious users
* vandalism / good faith / maintenance
* when a new problem emerges, when is a bot chosen and when a filter? this
  probably depends at least partially on who is handling it; TODO: ask people!
  (there are bot operators who are also edit filter managers)
---
## Q4: How have these tasks evolved over time (are there changes in the type, number, etc.)?

* filter hit numbers are of the same magnitude as reverts (way higher than
  initially expected)
* beginning: more good-faith hits, later more vandalism hits (somewhat
  unexpected)
* surge in 2016 and a subsequently higher baseline in hit numbers
  (explanation?)
* overall number of active filters stays the same (condition limit)
* most active filters of all time are quite stable through the years
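The yearly trend described above (the 2016 surge and the higher baseline afterwards) can be checked with a simple aggregation over the hit log. A minimal sketch, assuming the hits have been exported as a list of timestamp strings; the data below is hypothetical, not real log content:

```python
from collections import Counter

# Hypothetical filter hit timestamps (stand-ins for abuse_filter_log rows);
# the real data would come from the database.
hits = ["2015-03-01", "2016-07-14", "2016-08-02", "2016-11-30", "2017-01-05"]

# Aggregate hits per year to make trends such as the 2016 surge visible.
hits_per_year = Counter(ts[:4] for ts in hits)
for year in sorted(hits_per_year):
    print(year, hits_per_year[year])
```

The same per-year counting applies unchanged to millions of rows once a dump is available.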
---
## Open Questions / Directions for future studies

* MR is merged: the abuse_filter_history table is now available
* How have edit filters' tasks evolved over time?: should be easier to look
  into with the abuse_filter_history table.
  When a dump becomes available, an extensive investigation of filters' actions, creation and activation patterns, as well as the patterns they have targeted over time will be possible.
* What proportion of quality control work do filters take over?: Filter hits can be systematically compared with the number of all edits and reverts via other quality control mechanisms.
* Is it possible to study the filter patterns in a more systematic fashion?
* What can be learnt from this?: For example, it has come to attention that 1/5 of all active filters discriminate against new users via the `!("confirmed" in user_groups)` pattern.
  Are there other tendencies of interest?
* Is there a qualitative difference between the tasks/patterns of public and
  hidden filters?: According to the guidelines for filter creation, general filters should be public while filters targeting particular users should be hidden. Is there something more to be learnt from an examination of hidden filters' patterns? Do they actually conform to the guidelines? (One will have to request access to them for research purposes, sign an NDA, etc.)
* How are false positives handled?: Have filters been shut down regularly because they matched more false positives than they had real value? Are there large numbers of false positives that corrupt the filters' hit data and thus the interpretations offered by the current work?
* To implement a bot or to implement a filter?: An ethnographic inquiry: when an editor who is simultaneously an edit filter manager and a bot operator is faced with a new problem, how do they decide which mechanism to employ for the solution?
* What are the repercussions on affected editors?
An ethnographic study of the consequences of edit filters for editors whose edits are filtered. Do they experience frustration or alienation? Do they understand what is going on? Or do they, for example, experience edit filters' warnings as helpful, appreciate the hints they are given, and use them to improve their collaboration?
* What are the differences between how filters are governed on EN Wikipedia
  compared to other language versions?: Each Wikipedia language version has a local community behind it.
  These communities vary, sometimes significantly, in their modes of organisation and values.
  It would be very insightful to explore disparities in filter governance and in the types of filters implemented between different language versions.
* Are edit filters a suitable mechanism for fighting harassment?: A disturbing rise in online personal attacks and harassment is observed in a variety of online spaces, including Wikipedia [Duggan2014].
  The Wikimedia Foundation sought to better understand harassment in its projects via a Harassment Survey conducted in 2015 [Wikimedia:HarassmentSurvey].
  According to the edit filter noticeboard archives [Wikipedia:EditFilterNoticeboardHarassment], there have been some attempts to combat harassment by means of filters.
  The tool is also mentioned repeatedly in the timeline of Wikipedia's Community Health Initiative [Wikipedia:CommunityHealthInitiative], which seeks to reduce harassment and disruptive behaviour on Wikipedia.
  An evaluation of its usefulness and success at this task would be very interesting.
* (How) has the notion of "vandalism" on Wikipedia evolved over time?: By comparing older and newer filters, or updates to filter patterns, it could be investigated whether there has been a qualitative change in the interpretation of the notion of "vandalism" on Wikipedia.
* What are the urgent situations in which edit filter managers are given the
  freedom to act as they see fit and ignore best practices of filter adoption
  (i.e. switch a filter on in log-only mode first and announce it on the
  noticeboard so others can have a look)? Who determines that they are urgent?
  These cases should be scrutinised extra carefully, since "urgent situations"
  have historically always been an excuse for cuts in civil liberties.
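The "1/5 of all active filters" estimate above comes from scanning filter patterns for the new-user condition. A minimal sketch of such a scan; the example patterns below are hypothetical stand-ins, not real filter rules:

```python
# Count filters whose pattern discriminates against new users, i.e. contains
# the !("confirmed" in user_groups) condition.
# All patterns here are hypothetical examples for illustration only.
patterns = [
    '!("confirmed" in user_groups) & article_namespace == 0',
    'edit_delta > 4000 & "http" in added_lines',
    '!("confirmed" in user_groups) & action == "edit"',
    'user_editcount < 10',
    'page_title == "Main Page"',
]

NEW_USER_CONDITION = '!("confirmed" in user_groups)'
targeting = [p for p in patterns if NEW_USER_CONDITION in p]
print(f"{len(targeting)}/{len(patterns)} filters match on unconfirmed users")
```

A plain substring match like this undercounts patterns that express the same condition with different spacing or operator order, so a real analysis would normalise the patterns first.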
---
## Current Limitations

Data
* Only EN Wikipedia
* abuse_filter_history missing
* no access to hidden filters

Process
* manual filter classification only conducted by me
* no ethnographic analysis; it could answer valuable questions (e.g. bot vs
  filter?)

Evaluation: what would I do differently? / what went not so well?
* Start writing after getting hold of all the data
---