Dr Nick Kelly writes:
An image drawn from Buddhist philosophy is of a spider’s web covered with dew-drops. The significance of the image is that its beauty comes from the way in which each dew drop reflects all other drops. The droplets are interconnected in this complex way, and their beauty comes from these connections.
The image is a powerful one to introduce the theme of how we determine quality (of resources, people, information) in a society that increasingly relies upon the internet. The internet has changed and will continue to change many sectors of society by making it cheap (trivially so) and accessible (we all know how to do it) to duplicate and communicate data anywhere in the world. As a society we are still exploring the effects and potential of this.
This blog post is about one idea within this context; that much of the disruptive innovation driven by the internet has come about not simply through increased inclusivity but, rather, through innovation in ways to distribute the determination of quality.
Whilst it is often simple to ‘scale things up’ with the internet and get more people involved in something, the hard part is finding a way to similarly scale up the way that quality is determined. (As a working definition we adopt the notion of quality as ‘fit for purpose’.)
Some examples are helpful for introducing the idea and distilling a common narrative (Kelly, Sie, & Suwer, In press):
- In the 90s anybody could create a web page with HTML, and many people did. However, judging the usefulness of a given site was a difficult problem. Google changed all that with their use of eigenvector centrality (within the ‘PageRank’ algorithm) to quantify quality based upon the interconnected network of web page links (Page, Brin, Motwani, & Winograd, 1999). A site’s value is judged based upon the links to it from other sites, and these links are weighted based upon their respective value. By mining the graph of connections the value of each site can be determined. (A fantastic intuitive understanding of the complex nature of this calculation can be gained through a NetLogo simulation of PageRank.)
- In research administration there is a desire to ‘manage’ the quality of research and hence to measure it. Many academics publish many articles, but which ones can be judged as high quality? The field of bibliometrics attempts to respond to this problem through content and citation analysis. Some of the most popular metrics (e.g. the SCImago Journal Rank, commonly known as the SJR) utilise applications of the same eigenvector centrality measure used in Google’s PageRank.
- The company Airbnb has provided a platform such that anybody can open up their house to guests, allowing individuals to compete with hotels in providing accommodation. The difficulty in building this platform was not so much enabling people to list their houses (increasing inclusivity), but rather the challenge of ensuring quality in the listings: guests register by uploading ID documents, hosts must provide accurate photos, and ratings and reviews provide constant community feedback.
- The company Uber allows anybody to use their car to provide ‘taxi’ services to others who request rides through its platform. The online platform removes barriers to entry (inclusivity), allowing anyone with a car to compete in providing taxi services, and it maintains quality in a way similar to Airbnb.
- eBay similarly allowed for increased inclusivity in online trading, allowing anybody to turn their home into a warehouse for selling goods. The challenge of ensuring quality involves legal protections, reviews, ratings and metrics such as number of goods sold.
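The eigenvector-centrality idea from the first example can be sketched in a few lines of Python. This is a toy illustration only: the four-page link graph is invented for the example, though the 0.85 damping factor matches the value used in the original PageRank paper.

```python
# Toy PageRank via power iteration (illustrative sketch, not Google's
# production algorithm). Pages A-D and their links are made up.

damping = 0.85  # damping factor from the original PageRank paper

# adjacency: each page maps to the pages it links to
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

pages = list(links)
n = len(pages)
rank = {p: 1.0 / n for p in pages}  # start with uniform rank

for _ in range(50):  # iterate until the ranks stabilise
    new_rank = {p: (1 - damping) / n for p in pages}
    for page, outlinks in links.items():
        # each page distributes its rank evenly across its outlinks
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank
```

The result captures the intuition in the text: page C, which receives links from A, B and D, ends with the highest rank, while D, which nobody links to, ends with the lowest. A link from a highly ranked page is worth more than a link from an obscure one, because each page passes on a share of its own rank.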
These few cherry-picked examples have a common narrative: the internet makes it possible to implement processes on a grand scale. Wherever this occurs, a need is created to distinguish quality. The above examples can all be seen as disruptive innovations within their sector. It can be argued that the significant innovation in each case is the way in which quality is determined within the large-scale community.
The current fashion for MOOCs (the acronym for which is increasingly irrelevant) in higher education provides a useful case study. The model of students being taught by a lecturer and tutors on campus has been around for centuries and is difficult to scale. Putting digital course content online and making it open has been around in various forms for a long time but offered a different kind of education. Downes and Siemens introduced the term MOOC to emphasise the way that online courses designed in a specific way could be massive, open and, in this way, fulfil the aims of connectivist pedagogy (what are now known as cMOOCs). However, determining the quality of students within MOOCs (both formative, to aid the students, and summative, for the ‘gatekeeping’ role of university education) is difficult within this expanded context. Innovations such as automated marking and peer-assessment have been used within the MOOC context but have also been strongly criticised.
Recent work has begun developing a collection of ways in which quality is assured online and organising them into a taxonomy:
- Implicit measures: Users of a service do what they would normally do. Quality is measured using implicit metrics, such as eigenvector centrality and content analysis (e.g. PageRank) or indicators of behaviour (e.g. seller ratings on eBay based upon volume sold, and measures of contributions on StackOverflow)
- Explicit measures: Users of a service or paid professionals take specific actions to ensure quality. Examples include paid staff rating contributions to OER Commons, eBay members rating their experiences, or Amazon buyers reviewing books.
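Explicit measures raise an aggregation problem of their own: a listing with a single five-star review should not outrank one with hundreds of good reviews. One widely used technique is a Bayesian average, which pulls items with few ratings toward the site-wide mean. The sketch below is illustrative; the function name, prior weight and numbers are our own, not drawn from any particular platform.

```python
# Bayesian average of explicit ratings (illustrative sketch).
# Items with few ratings are pulled toward the site-wide mean, so one
# perfect review cannot outrank an established item.

def bayesian_average(ratings, prior_mean, prior_weight=5):
    """Blend an item's ratings with a prior belief (the site-wide mean).

    prior_weight acts like a number of 'phantom' ratings at prior_mean.
    """
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + len(ratings))

site_mean = 4.0
established = [5, 4, 5, 4, 5, 4, 5, 4, 5, 4]  # ten ratings averaging 4.5
newcomer = [5]                                 # a single perfect rating

score_established = bayesian_average(established, site_mean)  # ~4.33
score_newcomer = bayesian_average(newcomer, site_mean)        # ~4.17
```

Despite the newcomer's perfect raw average, its adjusted score sits below the established item's, which is the behaviour a platform relying on explicit community feedback typically wants.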
A theory for quality in connected systems?
It is useful to identify and describe these measures of quality, but a more profound question is: Can we identify a technique that could aid this kind of innovation for determining quality?
An inspiring example is Shannon’s work in developing information theory (Shannon, 2001). Shannon saw that engineers were devising ad-hoc ways of sending and receiving signals at greater or lesser bandwidths and in the presence of noise (interference). Rather than contribute another domain-specific solution, Shannon developed a mathematical representation of the problem, the implications of which form the basis for much of the cryptography and communications technology we use today.
What would such a representation, abstracted away from eigenvector centrality or other specific measures, look like? Or, at the very least, what would a general formulation look like of an approach for determining quality regardless of the medium or context?
This blog post is much more about questions than answers. The spider’s web remains a beautiful symbol for our ever more connected world. Within recent centuries we have moved from a philosophy of the whole-in-the-parts to mathematical and applied representations of it. We propose that this way of thinking is particularly useful for distributing the determination of quality – there remains much more to be uncovered in applying this type of thinking.
If you would like to be involved in this research as a collaborator or higher degree research student, contact Dr Nick Kelly.
Image credit: Dewy Spider Web by User:Fir0002. Used under Creative Commons Attribution-Share Alike 3.0 Unported licence.
Kelly, N., Sie, R., & Suwer, R. (In press). Innovating processes to determine quality alongside increased inclusivity in higher education. In M. Keppell, S. Reushle, & A. Antonio (Eds.), Open Learning and Formal Credentialing in Higher Education: Curriculum Models and Institutional Policies. IGI Global.
Page, L., Brin, S., Motwani, R., & Winograd, T. (1999). The PageRank citation ranking: bringing order to the web.
Shannon, C. E. (2001). A mathematical theory of communication. ACM SIGMOBILE Mobile Computing and Communications Review, 5(1), 3-55.