
For most people in the developed world, the Internet has become
an intrinsic part of daily life, facilitating everything from online shopping to
connecting friends and family members, to even introducing strangers to one
another. Not only does the Internet provide connectivity, it has the potential
to introduce new ideas and ways of thinking about issues that shape our lives.
It has also brought about new challenges and opportunities – while humans use
the Internet to access news and information, shopping sites and for social
interaction, etc., a new type of technology has been created to automate and to
help facilitate some of what we undertake online. Bots are small pieces of
software that can be programmed to help users gain clarity about a transaction,
reducing business and agency costs; for example, businesses can use bots to
replace humans as agents on commercial websites in order to help consumers
facilitate a transaction or to act as an automated customer service agent.1 However, bots can self-replicate,
become “smarter” and can be pre-programmed to trick, manipulate, and deceive
users into making decisions that they otherwise would not make. This is due
largely to dramatic improvements in artificial intelligence (AI)
software capabilities. There are two troubling issues for regulators to
consider: first, the technology has become so advanced that human users cannot
always determine when they are communicating with a bot. Second, algorithmic
processing and data mining can target deceptive practices at a particular user
or a collective of Internet users in order to start an informational cascade,
or to give the appearance of an organic groundswell of opinion that, amongst
other problems, validates fringe beliefs through confirmation bias.

This thesis posits an improved model for the regulation of
online content, including false and deceptive content, that
accounts for the mental shortcuts humans use when making decisions. The
argument is that building heuristics into regulatory models will encourage
regulators to recognise shortcomings in law and regulatory design, replacing
forms of rationality as the foundation upon which regulatory models for the
online environment have been built. Extensive academic scholarship2 has established that we are
prone to influence (emotional, cultural traditions, genetic heritage, social
pressures, etc.) and cognitive manipulation (changes in the environment,
heuristic strategies, anchoring, and framing, etc.). These advances in our
understanding of how people make decisions have empowered agents to deploy
deceptive practices across digitally mediated platforms (DMPs) that could
facilitate a commercial transaction or manipulate political debate and
deliberation, practices that (a) may not be in the actor’s interest and (b) would be
regulated in the offline world. This thesis argues two important, but
connected, points. First, users are entering into commercial transactions that
are not in their best interests; automation and algorithmic processing are being
used to take advantage of our decision-making heuristics at a scale not
possible in the “real world”. Second, our reliance on heuristics may be
advantageous in some situations in the online environment, but it also results
in error, distortion, and systematic biases in our thinking. As a result,
propagators are using deceptive techniques to influence our political
decision-making to an unprecedented and unquantifiable extent that is only
possible through the use of technical features offered by DMPs. Accordingly,
this thesis will argue that present regulatory frameworks are insufficient
to address these types of deceptive practices, which harm our collective and
individual ability to maximise our own utility and to make the best
decisions for our own well-being.

The methodology of the thesis is a part-doctrinal analysis of
legislation, regulatory frameworks, and academic research that covers
commercial transactions and political deliberation in both the offline and
online environments. This method was used to compare the
rules and principles as applied to deceptive practices across DMPs. However,
given the focus on heuristics and user psychology, there is also an extra-legal
and policy focus throughout. This partly reflects the fact that the law has
struggled to keep pace with technological change, and partly reflects the
insights gained from behavioural economics and cognitive and social psychology. Thus,
it is a critique of cyber-legal governance, informed by insights from behavioural
economics, cognitive and social psychology. Central is the concept of
heuristics, the idea that we use mental shortcuts when making decisions;
therefore, it focuses on how humans actually behave in the online environment
more than previous works in this area and encourages the development of a
heuristic model for the regulation of online deception. Previous models by
Lessig3, Murray4,
and Laidlaw5 are built on the concept of
rationality, the prevailing theory of classical economics, working from the
presumption that human beings have preferences that are rational, that we
maximise utility, that companies operate solely for the purpose of profit
maximisation while human beings make decisions independently from each other,
based on the information available. Accordingly, the methodology for this
thesis is hybrid, shifting between an analysis of cyber-regulation and
governance, law and rules of user interactions in the online environment, and
insights from the more recent fields of behavioural economics and cognitive
psychology. Ultimately rooted in regulatory theory, it seeks to determine if
the identified deceptive practices are harms, and how the law presently
addresses these harms in the “real world” before comparing the law to similar
deceptive practices identified in the online environment. It then examines
whether the law sufficiently addresses these harms and asks where
insufficiencies lie – what is the best way to address the problems? The thesis
asks and answers the following normative research question: Can traditional models of nodal and
decentred regulation, as applied to cyber-governance theory in the works of
Lessig, Murray, et al., adequately design models to regulate deceptive practices?

Chapter One reviews the cyber-regulatory literature and lays out the theoretical
framework for the balance of the thesis. It categorises schools of thought
about Internet governance and regulation into “waves”, starting with early
frontiersmen like John Perry Barlow, before moving to a discussion of the most
famous of the cyber-libertarians, Lawrence Lessig, whose work emphasised the
role of code in cyber-regulatory models. It then moves on to critique Andrew
Murray’s discussion of network communitarianism and Emily Laidlaw’s work on
nodal governance and Internet gatekeepers before concluding that the weakness
common to all these models is that each is built on the presumption that actors
in the online environment behave rationally. Throughout the critique of these
models, the focus is on their decision to assume that the actor self-maximises and
always acts in their best interest on the available information, with any
individual irrationality overcome by the collective action of other rational actors.

Chapter Two then moves on to a discussion of the mental shortcuts we
tend to use and the abundance of empirical evidence that humans normally rely on
heuristics when making decisions or when forced into making
judgements. One school of thought, the Fast and Frugal school, argues that heuristics are
positive and often useful, and that they developed for a variety of evolutionary
reasons, particularly in information-deprived environments. One of
the key insights from the Fast and Frugal school is that
the way a question is framed within an environment leads to different outcomes.
Conversely, two Israeli cognitive psychologists, Daniel Kahneman and Amos
Tversky, argue that our use of heuristics is problematic and makes us prone to
errors and systematic biases. While not discarding the work of the Fast and
Frugal school completely, this thesis focuses on the errors and biases we are
prone to making when asked to make quick and/or emotional decisions across a
variety of judgement paradigms found in online environments where neoclassical
economics fails; for example, rational models assume that in uncertain
situations humans will pay the expected value of an item, yet Kahneman and
Tversky showed that humans do not weigh decisions using economic value at all,
one of the foundational tenets of neoclassical economics. Their research also
shows that we deviate from rationality when the cost of a decision is small but
the potential benefit is large, or when losses are unlikely but substantial;
that we underestimate risk; and that we are loss averse and lazy decision-makers,
especially when faced with maths problems or when interpreting statistics. The chapter also highlights areas where
rationality is presumed in policy and regulatory design for the online
environment and where presumptions about rationality cause failure in
regulatory outcomes.

As our understanding of cognitive and social psychology has
developed, allowing new insights into human behaviour, a new breed of
deceptive practices has emerged that takes advantage of our reliance on
heuristics in the online environment. DMPs have amplified the ability of
propagators to use technology to their advantage: identify the errors humans
typically make when making decisions in the online environment, develop a
strategy to exploit those errors, and then use technological features
of the DMP to deploy campaigns that either create a commercial advantage or
attempt to influence others to align with the propagator’s own political ideology. Chapter
Three emphasises that certain characteristics of DMPs enable
the rapid dissemination of deception. By analysing the deception literature,
the chapter suggests that systems theory can be used to posit typologies to
assist the reader to understand the various types of deceptive communications.
The chapter also delves into the deception literature to understand why humans
are particularly susceptible to believing, interacting with, and sharing the materials they view on DMPs. It goes
on to posit two major claims: (a) some forms of deceptive campaigns are regulated
under existing regulatory frameworks, but the online environment frustrates or
interferes with consumers’ abilities to make optimal choices in their best
interest; moreover, DMPs are increasingly used to facilitate harm that would
otherwise be regulated in the real world. 
Accordingly, (b) some practices undertaken by businesses and their
agents are not presently covered by regulatory frameworks, yet they cause
consumers to enter into sub-optimal commercial transactions and should be
regulated.

Chapters One to Three thus set out the theoretical framework
for the thesis, identify the problems to be analysed, and set the research
questions. The problems that arise from using rationality as a regulatory
foundation in models that do not account for deceptive practices are examined
through the use of case studies in Chapters Four and Five. 

Budgeting our money appropriately is
a cornerstone of the global economy. Yet far too many people worry about how
they are going to pay their bills at the end of the month.6
Economists usually assume that it is easy to spend according to a budget.
However, even if we make good decisions almost all of the time, the small number
of bad decisions can undo all prior rectitude. Businesses are acutely aware of
those moments and target events in our lives where emotion trumps economic
caution. One source of anxiety is deceptive practice: we are prone to
excessive payments when outwith our comfort zone. When deceit creeps into
transactions, when consumers pay excessive amounts for cars or houses, or
underappreciate the risks of financial investments, economic catastrophe
can follow. Market equilibrium is a fundamental concept of economics and of
Murray’s theory of network communitarianism.7 The
theory of equilibrium suggests that users in the online environment will pick
out the best opportunities because they will go looking for them. This works
well in the “offline world”, where consumers have to labour to find
bargains. But in the online world, DMPs act as the perfect platform to
facilitate automated searches for those vulnerable at any given moment to
deceptive business practices. Chapter Four comprises
a case study of deception’s role in consumer transactions in the online
environment to determine compliance with the aims and objectives of the Unfair
Commercial Practices Directive and its implementing legislation and domestic
regulation on advertising and marketing. The chapter argues that there are
serious shortcomings in both the US Federal Trade Commission’s rules and guidelines
on consumer protection and the Advertising Standards Authority’s CAP Code for
protecting consumers from misleading advertising and marketing campaigns. By
analysing common errors and biases found during consumer transactions, the
chapter focuses on how basing consumer protection on the model of the
rational actor exacerbates the potential for deceptive practices across DMPs.
The case study reveals three major issues that need to be addressed by
regulators: first, these forms of deceptive practices encourage consumers to
make sub-optimal choices. Second,
outlawed and banned forms of marketing in the offline world spread with ease
across DMPs. Third, because the case study focuses on both permitted and prohibited
behaviour before, during, and after a commercial transaction, some forms and
techniques for branding and advertising fly underneath the regulator’s radar,
evading consumer protection controls that would otherwise protect traditional
offline transactions of a similar nature.

Chapter Five’s
case study examines the role of deception across DMPs to manipulate political
opinion and to influence the outcome of elections. The subject matter of this
case study is justified for a number of reasons. First, the European Court of
Human Rights has stated that, in the period before or during an election, it
may be considered necessary “to place certain restrictions, of a type which
would not usually be acceptable, on freedom of expression” in order to secure
the “free expression of the opinion of the people in the choice of the
legislature”.8  In many instances, this may mean that additional
regulation is imposed on traditional media outlets, but not as yet on
online media and DMPs, thus permitting deceptive practices to flourish during
“online media campaigns”.9  Yet during the UK Parliamentary elections in
2015, £1.6 million was spent on political advertising through Facebook and
Google alone.10 This figure was double the
amount spent on campaign broadcasts, and five times that spent on newspapers.
Political parties and advocacy groups spent a similar figure on online
political advertising during the UK’s referendum on EU membership.11  Second, while 2017 might have been dominated
by the UK election, 2016 was dominated by the US election and the referendum on
the UK remaining in the EU (Brexit). In addition, in 2016 six Member States of the
Council of Europe (CoE) held presidential elections, parliamentary elections
were held in twelve Member States, and five referenda were held other than Brexit,
meaning that over half (27) of the CoE Member States held elections or referenda
that year; over half (25) of the CoE Member States did so again
in 2017.12 Moreover, a number of European bodies, such
as the CoE’s Parliamentary Assembly, have noted their concern over the
influence of online media on elections. Indeed, the Parliamentary Assembly
adopted a Resolution in 2017 expressing its concern at campaigns launched
online “with the objective of harming democratic political
processes”.13 The case study examines the
role DMPs play in facilitating deceptive influence during elections and how
propagators use DMPs in a variety of ways in attempts to manipulate political opinion.

Chapter Six draws together the findings from the case studies and
examines their significance to the question of whether our present regulatory
frameworks are sufficient for protecting consumers and democratic deliberation
and addresses whether a heuristics model for the regulation of online deception
would be more appropriate than our current models based on rationality. Through
an examination of common errors and biases in the online environment and the
motivations behind the creators and propagators of deceptive campaigns, some of
the criticisms of our present models identified at the start of the thesis are
addressed; the chapter concludes that recognising behavioural markers in regulatory
approaches could rectify the deficiencies identified in the thesis and
throughout the case studies.

The thesis examines materials up to June 2017. Two limitations should be
noted. First, this is an incredibly fast-moving area, with facets changing and
developing as the reader progresses. Second, while the nature of regulatory
analysis of the online environment tends toward an international focus, the
focus of the case studies is commercial transactions and political
deliberation. These have been chosen because of their impact on the global
economy and to provide a critique of the role that deception, artificial
intelligence (AI), and DMPs play in interfering with the democratic process. Although the thesis critiques US and UK
frameworks, there is some limited discussion of European legislation,
particularly the Unfair Commercial Practices Directive and the regulations that
give it effect in domestic law. As the aim of this thesis is to address
shortcomings in a variety of regulatory measures, the emphasis remains on
cyber-regulatory theory.

1 Recent figures
indicate that 180 bot-related companies have attracted $24 billion in funding
to date: See Donal Power, ‘The bot
invasion is on, powered by $24B in funding’ (ReadWrite, 9 April 2017) available at: accessed 16 July

2 R. B. Cialdini, Influence (Vol. 3) (Port Harcourt: A.
Michel 1987); R. B. Cialdini, ‘Harnessing the Science of Persuasion’ (2001)
Harvard Business Review 79(9), 72-81; A. Tversky and D. Kahneman,
‘Judgment Under Uncertainty: Heuristics and Biases’ in D. Wendt and C. Vlek
(eds), Utility, Probability, and Human
Decision Making (Springer Netherlands, 1975) 141-162; R. Thaler, ‘Mental
Accounting and Consumer Choice’. (1985) Marketing Science 4(3), 199-214; T. C.
Leonard, ‘Richard H. Thaler, C. R. Sunstein, Nudge: Improving Decisions about
Health, Wealth, and Happiness’ (2008) Constitutional Political Economy 19(4),
356-360; H. A. Simon, Models of Bounded
Rationality: Empirically Grounded Economic Reason (Vol. 3) (MIT Press).

3 L. Lessig, Code and Other Laws of Cyberspace (Basic
Books, 2009).

4 A. D. Murray, The Regulation of Cyberspace: Control in the
Online Environment. (Routledge, 2007).

5 E. B. Laidlaw,
‘A Framework for Identifying Internet Information Gatekeepers’ (2010)
International Review of Law, Computers & Technology 24(3), 263-276.

6 The number one
stressor in American lives is money. Stress about work (closely tied to money)
came second. American Psychological Association, ‘Stress in America: Paying
with our Health’ (4 February 2015) available accessed
15 August 2017.

7 Andrew Murray,
The regulation of cyberspace: control in the online environment (Routledge
2007); Andrew Murray, ‘Symbiotic regulation’ (2008) 26 J Marshall J Computer
& Info L 207-228; Andrew Murray, ‘Nodes and gravity in virtual space’ (2011)
5(2) Legisprudence 195-221.

8 Bowman v the United Kingdom App no
24839/94 (ECtHR, 19 February 1998), para 43 available at accessed 13 July

9 Parliamentary
Assembly of the Council of Europe, Resolution 2143 (2017) Online media and
journalism: challenges and accountability, 25 January 2017 available at
accessed 13 July 2017.

10 The Electoral
Commission, Party spending in the 2015 UK PGE

11 The Electoral
Commission, Spending in the 2016 Referendum on the UK’s membership of the EU
available at: accessed 13 July

12 Council of
Europe, 2016 electoral calendar of the member states of the Council of Europe
available at accessed 13 July

13 Note 9, supra, CoE Resolution.