* So far the whole inquiry is largely descriptive. Capturing the status quo is fine, but then we should go a step further and ask "so what?" What do we gain from that? Explain the data
* maybe we won't be able to explain much of it; we can then frame the remainder as interesting open questions for ethnographers
* think about which values we embed into which systems, and how; --> Lessig
Difference bot/filter: filters are part of the "platform" (cf. also ~\cite{Geiger2014} and the criticism of the view of a holistic platform).
They are a MediaWiki extension, which means they run on official Wikimedia infrastructure (cf. \cite{Geiger2014} and "bespoke code").
This makes them more robust and bestows on them a different kind of status.
Bots, on the other hand, are what Stuart Geiger calls "bespoke code": auxiliary programs developed, maintained, and run by individual community members, typically (at least historically?) not on Wikimedia's infrastructure but on private computers or third-party servers.
A key difference is also that while bots check already published edits, which they may eventually decide to revert, filters are triggered before an edit is ever published.
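The pre-save vs. post-hoc distinction could be sketched roughly as follows (a minimal illustration only: the function names and patterns are invented here, and real filters are AbuseFilter rules hooked into MediaWiki, not Python):

```python
import re

# Invented example patterns; real filters use AbuseFilter's rule language.
FILTER_PATTERNS = [
    re.compile(r"(?i)buy cheap viagra"),  # spam phrase
    re.compile(r"!{5,}"),                 # excessive exclamation marks
]

def edit_filter(new_text: str) -> bool:
    """Pre-save hook: return True to disallow the edit before it is published."""
    return any(p.search(new_text) for p in FILTER_PATTERNS)

def bot_patrol(published_edits):
    """Post-save loop: a bot only sees edits after publication and yields
    the ones it would revert."""
    for edit in published_edits:
        if any(p.search(edit) for p in FILTER_PATTERNS):
            yield edit

# A filtered edit never reaches the database:
assert edit_filter("BUY CHEAP VIAGRA now!") is True
# A bot reverts the same content only after it has already been saved:
assert list(bot_patrol(["fine edit", "spam!!!!!!"])) == ["spam!!!!!!"]
```

The point of the sketch is the position in the pipeline, not the matching logic: the same rule applied pre-save blocks the edit outright, while applied post-save it can only undo it.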
* another difference between bots and filters: it is easier to DDoS the bot infrastructure than the filters: rent a cluster and edit until the revert backlog overflows
% Aim: I want to know why are there filters? How do they fit in the quality control ecosystem?
Distinction filters/bots: which tasks are handled by bots and which by filters (and why)? What difference does it make for admins? For users whose edits are targeted? %TODO: good question, but move to analysis, since we won't be able to answer it from the literature review alone
Q1 We wanted to improve our understanding of the role of filters in existing algorithmic quality-control mechanisms (bots, ORES, humans).
Q2 Which types of tasks do these filters take over compared to the other mechanisms? How do these tasks evolve over time (are there changes in type, number, etc.)?
Q3 Since filters are classical rule-based systems, what are suitable areas of application for such rule-based systems in contrast to the ML-based approaches?
Note:
* to answer the question about evolution over time, I really do need the abuse_filter_history table
* modify the 3rd question to: why are regexes still there when we have ML? Answering it most probably involves talking to people
* check what questions the first bot papers asked; they may serve as inspiration
\begin{itemize}
\item What are the goals pursued with this work? Which problem is to be solved?
\item A description of the initial ideas, the proposed approach, and the results achieved so far