diff --git a/thesis/6-Discussion.tex b/thesis/6-Discussion.tex
index a12919ecad02d8206fa416478fdbc61301cfc7cc..cbd82e9d07f1e9671b45d4d1a1b1aeb929d75902 100644
--- a/thesis/6-Discussion.tex
+++ b/thesis/6-Discussion.tex
@@ -14,86 +14,85 @@ In what follows, I go over each of them and summarise the findings.
 \section{Q1 What is the role of edit filters among existing quality-control mechanisms on Wikipedia (bots, semi-automated tools, ORES, humans)?}
 
 When edit filters were introduced in 2009, various other mechanisms that took care of quality control on Wikipedia had already been in place for some time.
-However, the community felt the need for an agent (mechanism, syn) preventing obvious but pervasive and difficult to clean up vandalism as early as possible.
-This was supposed to take workload off the other mechanisms along the quality control process (syn) (see figure~\ref{funnel}), especially off human editors who could then use their time more productively elsewhere, namely to check less obvious (syn) cases.
+However, the community felt the need for an instrument that prevents easy-to-recognise but pervasive and difficult-to-clean-up vandalism as early as possible.
+This was supposed to take workload off the other mechanisms along the quality control process (see figure~\ref{fig:funnel-with-filters}), especially off human editors who could then use their time more productively elsewhere, namely to check less obvious cases.
 %TODO is there another important findind from chapter 4's conclusion that is missing here?
 
-It seems obvious/natural/... to compare the edit filters, being a competely automated mechanism, with bots.
+Both filters and bots are completely automated mechanisms, so a comparison between the two seems reasonable.
 What did the filters accomplish differently?
 % before vs after
-A key distinction is that while bots check already published edits which they eventually may decide to revert, filters are triggered before an edit ever published.
+A key distinction is that while bots check already published edits which they may decide to eventually revert, filters are triggered before an edit is ever published.
 One may argue that nowadays this is not a significant difference.
-Whether a disruptive edit is outright disallowed or caught and reverted 2 seconds after its publication by ClueBot NG doesn't have a tremendous impact on the readers:
-the vast majority of them will never see the edit either way.
-Still, there are various examples of hoaxes that didn't survive long on Wikipedia but the couple of seconds before they were reverted were sufficient for the corrupted version to be indexed by various/multiple/... news aggregators and search engines. %TODO find them!
+Whether a disruptive edit is outright disallowed or caught and reverted two seconds after its publication by ClueBot NG doesn't have a tremendous impact on the readers:
+The vast majority of them will never see the edit either way.
+Still, there are various examples of vandalism that didn't survive long on Wikipedia but whose brief lifetime before being reverted was sufficient for hundreds of media outlets to report it as news~\cite{Elder2016}, which severely undermines the project's credibility.
 
-% The infrastructure question: Part of the software vs externally run
+% The infrastructure question: Part of the software vs externally run --> compared to admin bots, better communication!
 Another difference between bots and filters underlined several times in community discussions was that, as a MediaWiki extension, edit filters are part of the core software, whereas bots run on external infrastructure, which makes them both slower and generally less reliable.
 (Compare Geiger's account about running a bot on a workstation in his apartment which he simply pulled the plug on when he was moving out~\cite{Geiger2014}.)
 Nowadays, we can ask ourselves whether this is still of significance:
 A lot of bots are run on Toolforge~\cite{Wikimedia:Toolforge}, a cloud service providing a hosting environment for a variety of applications (bots, analytics, etc.) run by volunteers who work on Wikimedia projects.
-The service is maintained by the Wikimedia Foundation the same way the Wikipedia servers are, so in consequence just as reliable and available as the encyclopedia itself.
+The service is maintained by the Wikimedia Foundation the same way the Wikipedia servers are, so it is in consequence just as reliable and available as the encyclopedia itself.
 The argument that someone powered off the basement computer on which they were running bot X is just not as relevant anymore.
 
-% general discussion on "platform" and what the metaphor hides? (e.g. bot develorpers' frustration that their work is rendered invisible?)
-
-
 % more on bots vs filters
 % collaboration possible on filters?
-% who edits filters (edit filter managers, above all trusted admins) and who edits bots (in theory anyone approved by the BAG)
-Above all the distinction of bots vs. filters: what tasks are handled by which mechanism and why? slides (syn!) into the foreground over and over aagain.
-After all the investigations I would venture the claim that from end result perspective it probably doesn't make a terrible difference at all.
-As mentioned in the paragraph above, whether malicious content is directly disallowed or reverted 2 seconds later (in which time probably who 3 user have seen it, or not) is hardly a qualitative difference for Wikipedia's readers. %TODO (although I'm making a slightly different point in the paragraph above, clean up!)
+% who edits filters (edit filter managers, above all trusted admins) and who edits bots (in theory anyone approved by the BAG)
+When comparing the tasks of bots described in related work (chapter~\ref{chap:background}) with the content analysis of filters' tasks conducted in chapter~\ref{chap:overview-en-wiki} (see also the discussion of Q3 in section~\ref{sec:discussion-q3}), the results show great overlap between the task descriptions of the two tools.
+From an end result perspective it doesn't seem to make a big difference whether a problem is taken care of by an edit filter or a bot.
+As mentioned in the paragraph above, whether malicious content is directly disallowed or reverted two seconds later (in which time perhaps three users have seen it, if any) is hardly a qualitative difference for Wikipedia's readers. %TODO (although I'm making a slightly different point in the paragraph above, clean up!)
 I would argue though that there are other stakeholders for whom the choice of mechanism makes a bigger difference:
 the operators of the quality control mechanisms and the users whose edits are being targeted.
-The difference (syn!) for edit filter managers vs. bot developers is that the architecture of the edit filter plugin supposedly fosters collaboration which results in a better system (compare with the famous ``given enough eyeballs, all bugs are shallow''~\cite{Raymond1999}).
-Any edit filter manager can modify a filter causing problems and the development of a single filter is mostly a collaborative (syn!) process.
+The significant distinction for operators is that the architecture of the edit filter plugin supposedly fosters collaboration which results in a better system (compare with the famous ``given enough eyeballs, all bugs are shallow''~\cite{Raymond1999}).
+Any edit filter manager can modify a filter causing problems and the development of a single filter is usually a collaborative process.
 A mere glance at the edit history of most filters reveals that they have been updated multiple times by various users.
-In contrast, bots' source code is often not publicly available and they are mostly run by one operator only, so no real peer review of the code is practiced and the community has time and again complained of unresponsive bot operators in emergency cases.
+In contrast, bots' source code is often not publicly available and they are mostly run by one operator only, so no real peer review of the code is practiced and the community has time and again complained of unresponsive bot operators in emergency cases~\cite{Wikipedia:EditFilterTalkArchive1}.
 (On the other hand, more and more bots are based on code from various bot development frameworks such as pywikibot~\cite{pywikibot}, so this is not completely valid either.)
-On the other hand, it seems far more difficult/restrictive to become an edit filter manager: there are only very few of them, the vast majority admins or in exceptional cases very trusted users.
-A bot operator on the other hand (syn) only needs an approval by the BAG and can get going.
+At the same time, it seems far more difficult to become an edit filter manager:
+There are only very few of them: the vast majority are admins, with, in exceptional cases, some very trusted users.
+By contrast, a bot operator only needs an approval for their bot by the Bot Approvals Group and can get going.
 
 The choice of mechanism also makes a difference for the editor whose edits have been deemed disruptive.
-Filters assuming good faith seek communication with the editor by issuing warnings which provide some feedback for the editor and allow them to modify their edit (hopefully in a constructive fashion) and publish it again.
-Bots on the other hand (syn) revert everything their algorithms find malicious directly.
-They also leave warning messages on the user's talk page informing them that their edits have been reverted because they matched the bot's heuristic and point them to a false positives page where they can make a report.
-It is still a revert first-ask questions later approach which is rather discouraging for good faith newcomers.
-In case of good faith edits, this would mean that an editor wishing to dispute this decision should raise the issue (on the bot's talk page?) and research has shown that attempts to initiate discussions with (semi-)automated quality control agents have in general quite poor response rates ~\cite{HalGeiMorRied2013}.
+Filters assuming good faith seek communication with the offending user by issuing warnings which provide some feedback and allow the user to modify their edit (hopefully in a constructive fashion) and publish it again.
+Bots on the other hand revert everything their algorithms find malicious directly.
+They also leave warning messages on the user's talk page informing them that their edits have been reverted because the bot's heuristic was matched and point them to a false positives page where they can make a report.
+It is still a revert-first-ask-questions-later approach which is rather discouraging for good faith newcomers.
+In the case of good faith edits, this would mean that an editor wishing to dispute the decision should raise the issue on the bot's talk page, and research has shown that attempts to initiate discussions with (semi-)automated quality control agents have in general quite poor response rates~\cite{HalGeiMorRied2013}.
+
+Compared to MediaWiki's page protection mechanism, edit filters allow for precise control at the user level:
+One can implement a filter targeting specific malicious users directly instead of restricting edit access for everyone.
 
 %TODO Fazit?
 
 \section{Q2: Edit filters are a classical rule-based system. Why are they still active today when more sophisticated ML approaches exist?}
-%* What can we filter with a REGEX? And what not? Are regexes the suitable technology for the means the community is trying to achieve?
 
-Research has long demonstrated higher precision and recall of machine learning methods~\cite{PotSteGer2008}. %TODO find quotes!
+Research has long demonstrated the higher precision and recall of machine learning methods~\cite{PotSteGer2008}.
 With this premise in mind, one has to ask:
 Why are rule based mechanisms such as the edit filters still widely in use?
 Several explanations of this phenomenon sound plausible.
 For one, Wikipedia's edit filters are an established system which works and does its work reasonably well, so there is no pressing reason to change it (``never touch a running system'').
-Secondly, it has been organically weaven in Wikipedia's quality control ecosystem with historical needs to which it responded and people at the time believed the mechanism to be the right solution to the problem they had.
+Secondly, it has been organically woven into Wikipedia's quality control ecosystem.
+There were historical necessities to which it responded and people at the time believed the mechanism to be the right solution to the problem they had.
 We could ask why it was introduced in the first place when there were already other mechanisms.
 Besides the specific instances of disruptive behaviour stated by the community as motivation to implement the extension,
 a very plausible explanation is that, since Wikipedia is a volunteer project, a lot happens because at a particular moment particular people are familiar with certain technologies, so they construct a solution using the tools they are good at (or want to use).
 
 Another interesting reflection is that rule based systems are arguably easier for humans to implement and, above all, to understand, which is why they still enjoy popularity today.
-On the one hand, overall less technical knowledge is needed in order to implement a single filter:
+On the one hand, overall less technical knowledge is required in order to implement a single filter:
 An edit filter manager has to ``merely'' understand regular expressions.
-Bot development on the other hand (syn!) is a little more challenging:
-A developer needs resonable knowledge of at least one programming language and on top of that has to make themself familiar with stuff like the Wikimedia API, ....
-Moreover, since regular expressions are still somewhat human readable and understandable (syn!) in contrast to a lot of popular machine learning algorithms, it is easier to hold rule based systems and their developers accountable.
+Bot development by contrast is a little more challenging:
+A developer needs reasonable knowledge of at least one programming language and on top of that has to familiarise themselves with artefacts like the Wikimedia API.
+Moreover, since regular expressions are still somewhat human readable and comprehensible, unlike a lot of popular machine learning algorithms, it is easier to hold rule based systems and their developers accountable.
 Filters are a simple mechanism (simple to implement) that swiftly takes care of cases that are easily recognisable as undesirable.
-ML needs training data (expensive), and it is not simple to implement.
+ML needs training data (which is expensive), and it is not simple to implement.
+What is more, rule based mechanisms allow for a finer granularity of control:
+An edit filter can define a rule to explicitly exclude particular malicious users from publishing, which cannot be straightforwardly implemented in a machine learning algorithm.
 
 %Fazit?
 
-%TODO incorporate these things:
-\begin{comment}
-* they were introduced before the ml tools came around.
-* rule based system allow for a more precise level of control: I can tell a filter "disallow edits by malicious users X, Y and Z. I can't tell this a ML based mechanism."
-\end{comment}
 
 \section{Q3: Which type of tasks do filters take over?}
+\label{sec:discussion-q3}
 
 % TODO comment on: so what's the role of the filters, why were they introduced (to get over with obvious persistent vandalism which was difficult to clean up, most probably automated) -- are they fulfilling this purpose?
 Filters take over a broad range of tasks: juvenile and grave vandalism, spam, good faith disruptive edits (e.g. blanking an article instead of moving it because of unfamiliarity with the software and proper procedure), and maintenance tasks.
diff --git a/thesis/introduction.tex b/thesis/introduction.tex
index 13f4f76fae9f8aa4f9108abebf5bb304ff29c41c..21e4bbc5ea180dddf7a362b9c26086b205ce6989 100644
--- a/thesis/introduction.tex
+++ b/thesis/introduction.tex
@@ -135,7 +135,7 @@ The aim of this work is to find out why edit filters were introduced on Wikipedi
 Further, this research seeks to understand what tasks are taken over by filters %in contrast to other quality control meachanisms
 and—as far as practicable—track how these tasks have evolved over time (are there changes in type, numbers, etc.?).
 %and understand how different users of Wikipedia (admins/sysops, regular editors, readers) interact with these and what repercussions the filters have on them.
-Last but not least, it is discussed why a classic rule based system such as the filters is still operational today when more sophisticated machine-learning approaches exist.
+Last but not least, it is discussed why a classic rule based system such as the filters is still operational today when more sophisticated machine learning (ML) approaches exist.
 Since this is just an initial discovery of the features, tasks and repercussions of edit filters, a framework for future research is also offered.
 
 %\section{Methods}
diff --git a/thesis/references.bib b/thesis/references.bib
index 268453524036ef082957cb0201ddefcea962c713..25f4cef3ff676cf74d55fd9564d5def296f8db1f 100644
--- a/thesis/references.bib
+++ b/thesis/references.bib
@@ -34,6 +34,15 @@
   note = {\url{https://journals.sagepub.com/doi/pdf/10.1177/2053951717726554}}
 }
 
+@misc{Elder2016,
+  title = {Inside the game of sports vandalism on {W}ikipedia},
+  author = {Elder, Jeff},
+  year = {2016},
+  month = {January},
+  note = {Retrieved 24 July 2019 from
+            \url{https://blog.wikimedia.org/2016/01/06/sports-vandalism-on-wikipedia/}}
+}
+
 @inproceedings{ForGei2012,
   title = {Writing up rather than writing down: Becoming {W}ikipedia literate},
   author = {Ford, Heather and Geiger, R Stuart},