BIGwiki is your all-encompassing information, education, and research portal for all things Web3-game-related, the go-to space for creators, developers, gamers, investors, and anyone interested in the Web3 space.
BIGwiki comprises a team of committed individuals, each an expert in their field. At its head is Michael Dennis, Chief Conspirator, with 30+ years’ experience in web and technology. Supporting Michael is an expert team of lifelong gamers, graphic designers, and AI, UI, and UX specialists, backed by 25+ years of experience in academia.
Well, at its core, BIGwiki provides a suite of community-centric tools to equip you, the User, with the knowledge, skills, and resources to reach an informed decision regarding the “quality” of your favorite Web3 games. But what do we mean by “quality”? For now, think of quality simply in terms of “Goodness”.
This is a great question, and it gets to the heart of what we are. Central to our evaluations is the Big Income Games Artificial Intelligence System (BIG Ai-system), which currently employs a combination of state-of-the-art (SOTA) large language models (LLMs). In that sense, we consider ourselves LLM-agnostic, exploring different models and adapting our approach as the technology develops. For example, later iterations will use other open-source models specifically designed to aid the interpretation of emotion and sentiment as expressed within text.
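To make the “LLM-agnostic” idea concrete, here is a minimal, hypothetical Python sketch of keeping model backends swappable behind a common interface. The class and function names are illustrative assumptions, not BIGwiki’s actual code.

```python
from typing import Protocol


class LLMBackend(Protocol):
    """Common interface that any model backend must satisfy."""

    def generate(self, prompt: str) -> str:
        ...


class ExampleBackendA:
    """Placeholder for one state-of-the-art model provider."""

    def generate(self, prompt: str) -> str:
        return f"[Backend A answer to: {prompt}]"


class ExampleBackendB:
    """Placeholder for an open-source model tuned for sentiment."""

    def generate(self, prompt: str) -> str:
        return f"[Backend B answer to: {prompt}]"


def answer(question: str, backend: LLMBackend) -> str:
    # The calling code never depends on a specific provider, so backends
    # can be swapped or combined as the technology develops.
    return backend.generate(question)


if __name__ == "__main__":
    print(answer("Is this game any good?", ExampleBackendA()))
    print(answer("Is this game any good?", ExampleBackendB()))
```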
Regardless of the model combination, however, and as a “nod” to all our wonderful mothers, we affectionately termed our first in-house BIGwiki Agent, BIGmomma.
Dual-Purpose Role of our BIG Ai-System
Our BIG Ai-system serves two fundamental roles in the research process.
1) On the one hand, BIGwiki has “created” the persona BIGmomma. As such, as an entity in and of herself, like a human, BIGmomma assesses the data, and her research output is somewhat like her “opinion” in answer to the question “Is this game any good?” BIGmomma thus functions like any other User, in that her research represents the views and opinions of one, and only one, such User.
In future iterations, BIGwiki will have multiple personas, each with their own distinct “personality” and, therefore, their own lens through which to interpret the given sources. Note also that it is we, the BIGwiki team members, who “set” the initial characteristics that influence the ultimate “form” of that personality.
2) We also use the BIG Ai-system as a multi-purpose research tool to help streamline the research process. For example, it is used as a moderation tool to facilitate fact-checking and editing, and as a summarization tool to summarize the collective research, views, comments, and opinions submitted by our population of Users.
The Research and AI departments continuously conduct trials to determine “where” and “how” best to implement BIG’s AI system in the research process.
The Research Process:
Exactly, and that is what sets us apart. Drawing on 25+ years of academic expertise, we evaluate each Web3 game following the basic principles of a classic “Systematic Review”, which aims to “identify, appraise and synthesize the available evidence meeting pre-specified eligibility criteria to answer a specific research question.” [1]
In this instance, the fundamental research question asks, “Is this game any good?” To address this, each game is critically evaluated on seven distinct criteria or sections (Gameplay, Earn, Tokenomics, Team Behind the Game, Partners and Investors, Community, and Roadmap & Deals), with each section further subdivided into subsections. For example, the Gameplay section is divided into Summary, How to Play, Gamer Engagement, Player Skills & Strategy, Graphics & Trailers, and Unanswered Questions subsections.
Subsection-specific questions are then formulated; for example, in “Graphics and Trailers,” we ask, “Are the game’s graphics high quality, and do they enhance the gaming experience and overall gameplay?”
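Purely as an illustrative Python sketch (an assumption about representation, not BIGwiki’s internal schema), the section/subsection taxonomy and its guiding questions could be held in a nested structure like this:

```python
# Illustrative only: the seven sections listed above, with the Gameplay
# subsections spelled out and one example subsection-specific question.
EVALUATION_FRAMEWORK = {
    "Gameplay": {
        "Summary": None,
        "How to Play": None,
        "Gamer Engagement": None,
        "Player Skills & Strategy": None,
        "Graphics & Trailers": (
            "Are the game's graphics high quality, and do they enhance "
            "the gaming experience and overall gameplay?"
        ),
        "Unanswered Questions": None,
    },
    "Earn": {},
    "Tokenomics": {},
    "Team Behind the Game": {},
    "Partners and Investors": {},
    "Community": {},
    "Roadmap & Deals": {},
}

# Each game is evaluated section by section, subsection by subsection.
for section, subsections in EVALUATION_FRAMEWORK.items():
    for subsection, question in subsections.items():
        if question:
            print(f"{section} / {subsection}: {question}")
```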
Each question is addressed via a three-step process: 1) sourcing the relevant input data, 2) configuring and running the AI, and 3) moderating the output. Note the intricate synergy of human-academic expertise and AI input at each stage of the research process.
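Read as a hedged, high-level sketch (all function names here are hypothetical placeholders, not BIGwiki’s code), the three steps form a simple pipeline; each step is discussed in detail below.

```python
def source_inputs(game: str) -> list[str]:
    # Step 1: humans select the CRAAP-approved input sources (see below).
    return [f"{game} whitepaper", f"{game} community reviews"]


def run_ai(question: str, sources: list[str]) -> str:
    # Step 2: the configured AI answers the research question from the
    # supplied sources (stubbed here).
    return f"Draft answer to '{question}' based on {len(sources)} sources."


def moderate(draft: str) -> str:
    # Step 3: humans fact-check and moderate the draft output.
    return draft + " [moderated]"


if __name__ == "__main__":
    sources = source_inputs("ExampleGame")
    draft = run_ai("Is this game any good?", sources)
    print(moderate(draft))
```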
i) Sourcing the Input Data:
The quality of our research output rests, at least in part, on the quality of the sources supplied to BIG’s AI system. As such, given the plethora of data available, including fake news, biased opinions, and poorly referenced research articles and blogs, we manually select the input sources supplied to the AI. In this way, we hope to significantly improve the quality and reliability of the input data. For example, where available, we provide the AI with the company’s whitepaper or litepaper; relevant and reliable reviews, research articles, blogs, social media posts, and video recordings; and the fundamental data metrics of the game’s token and blockchain.
For web-based sources (indeed, for ANY source), we employ the “CRAAP” (/kræp/) Test developed by librarians at California State University [2]. This involves a vertical assessment of whether a source is reliable and credible enough for our research.
Applying the “CRAAP” test involves an all-things-considered [3] case-by-case evaluation of the following five categories: Currency, Relevance, Authority, Accuracy, and Purpose, with only “CRAAP-Approved” sources being provided to our BIG Ai-system.
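Purely to illustrate the shape of such an evaluation, here is a small Python sketch. The five categories come from the CRAAP test itself; the all-must-pass rule is an assumption for illustration, not BIGwiki’s actual threshold.

```python
from dataclasses import dataclass


@dataclass
class CraapAssessment:
    """One human judgement per CRAAP category for a single source."""
    currency: bool   # Is the information current enough for our question?
    relevance: bool  # Does it actually bear on the research question?
    authority: bool  # Is the author/publisher credible?
    accuracy: bool   # Is the content supported and verifiable?
    purpose: bool    # Is the intent informative rather than purely promotional?

    def approved(self) -> bool:
        # Illustrative rule: every category must pass before the source
        # is supplied to the AI system.
        return all([self.currency, self.relevance, self.authority,
                    self.accuracy, self.purpose])


whitepaper = CraapAssessment(True, True, True, True, True)
anonymous_blog = CraapAssessment(True, True, False, False, False)

print(whitepaper.approved())      # True  -> supplied to the AI
print(anonymous_blog.approved())  # False -> excluded
```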
ii) Configuring the AI and Drafting the Research Output:
Before supplying the AI with CRAAP-approved sources, our AI team sets the underlying parameters under which the AI operates, thus influencing the character of the AI researcher. In this way, the human element influences the lens through which the AI researcher interprets information. That is, within the context of the preset parameters and given the human-approved input sources, BIGmomma answers the research question.
We use a Retrieval-Augmented Generation (RAG) system for information retrieval; however, exactly how BIGmomma perceives and interprets the relevant chunks of information from those sources is the mystery that occurs within the AI’s black box.
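For readers unfamiliar with RAG, here is a deliberately simplified sketch of the retrieval step: approved sources are split into chunks, the chunks most relevant to the question are selected, and only those chunks are passed to the model. The toy word-overlap score is an assumption for illustration; real systems typically use embeddings, and this is not BIGwiki’s implementation.

```python
def chunk(text: str, size: int = 40) -> list[str]:
    # Split a source document into fixed-size word chunks.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def score(question: str, passage: str) -> int:
    # Toy relevance score: count of words shared by question and passage.
    return len(set(question.lower().split()) & set(passage.lower().split()))


def retrieve(question: str, documents: list[str], top_k: int = 3) -> list[str]:
    passages = [p for doc in documents for p in chunk(doc)]
    passages.sort(key=lambda p: score(question, p), reverse=True)
    return passages[:top_k]


if __name__ == "__main__":
    docs = ["The game rewards player skill with tokens ...",
            "The team raised funding from several investors ..."]
    context = retrieve("Is this game any good for skilled players?", docs)
    prompt = "Answer using only this context:\n" + "\n".join(context)
    print(prompt)  # This prompt would then be sent to the LLM.
```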
This is one source of inevitable uncertainty in our product since, despite our best efforts, we will never truly know at any one time what all the research on site actually “looks like”. We will not have checked every fact, identified every missing data point, read every new summary re-run, or read all the games’ research.
We aim to become familiar with this uncertainty: its extent, its magnitude, and its potential impact. Our objective is to reduce it by 1) improving the research process and 2) better understanding the product, that is, the research output.
The immediate output from the black box is our primary research product, which needs fact-checking and moderating.
Before discussing the mechanics of moderation, it is important to note that we are mindful of the extent to which we modify the product. At this stage of the process, the product is BIGmomma’s assessment of the human-determined input sources, and our moderation must not change that identity. So, a key question is, “Will our proposed moderation improve the quality of the research output?” Changing the identity of the AI, for example, would lessen the quality of the product.
iii) Moderation:
Primary and Secondary Moderation:
Think of moderation as comprising “Primary” and “Secondary” moderation, where primary moderation refers to us moderating BIGmomma’s output, and secondary moderation refers to us moderating User-uploaded content. For now, we only need to think about primary moderation.
Primary moderation involves two sequential steps: 1) content moderation and 2) copyedit moderation.
1) Content moderation
At its simplest, content moderation involves i) identifying “hotspot” areas of investigation, where hotspots are those areas in the research most likely to contain hallucinations, errors, and/or inconsistencies, ii) critically evaluating those hotspots, and iii) implementing a moderation-decision based on those evaluations.
a) Hotspot Identification:
Hotspots are identified through a blend of human and AI-assisted moderation tools. Human-identified hotspots are those areas in the research that are recognized as ALWAYS needing checking. Examples include, but are not limited to, numbers and statistics, differentiating between internal and external factors, and differentiating between company- and game-level focus when evaluating SWOT and USPs and Flaws of the Game, respectively. AI-identified hotspots are those identified using the variety of AI-assisted moderation tools we are currently exploring.
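As one hypothetical example of a rule-based hotspot detector (illustrative only, not one of BIGwiki’s actual tools), numeric claims in a draft can be flagged automatically for human checking, since numbers and statistics always need verification:

```python
import re

# Matches numbers, percentages, and currency-style figures in draft text.
NUMERIC_CLAIM = re.compile(r"\$?\d[\d,]*(?:\.\d+)?%?")


def flag_numeric_hotspots(draft: str) -> list[str]:
    """Return sentences containing figures that a human must verify."""
    hotspots = []
    for sentence in draft.split("."):
        if NUMERIC_CLAIM.search(sentence):
            hotspots.append(sentence.strip())
    return hotspots


draft = ("The token supply is 1,000,000,000. The team has shipped two betas. "
         "Staking yields 12% annually.")
for h in flag_numeric_hotspots(draft):
    print("CHECK:", h)
```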
b) Critical Evaluation
Critical evaluation involves humans investigating the hotspots to determine whether moderation is required.
c) Moderation-based Decisions
“How” the content is moderated depends on the “type” of content. A couple of examples help to illustrate:
Warnings – Warnings are also part of content moderation, but they present themselves and impact the output differently depending on the nature of the information.
To understand this, it is important not to conflate the different conceptualizations of “warnings”. We consider both internal and external warnings. Internal warnings are generated to notify us of a call to action, e.g., to investigate a major news event affecting one of the games that has not yet been reflected in the input sources (or the research output). External warnings are those that Users see on the user interface (UI), warning them about a specific issue associated with the game that BIGmomma does not seem to have sufficiently assimilated, e.g., a possible inaccurate reflection of an issue in the score, a failure to recognize the extent of a threat, or a failure to discuss an issue at all. It follows, therefore, that internal warnings can generate external warnings, but not vice versa.
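To make the internal/external distinction concrete, here is a small hypothetical sketch; the one structural rule it encodes is that an internal warning can generate an external one, but not the reverse:

```python
from dataclasses import dataclass, field


@dataclass
class ExternalWarning:
    """Shown on the UI to warn Users about an issue BIGmomma may not have
    sufficiently assimilated (score, threat, missing discussion)."""
    game: str
    message: str


@dataclass
class InternalWarning:
    """A call to action for the team, e.g. a major news event not yet
    reflected in the input sources or the research output."""
    game: str
    message: str
    external: list[ExternalWarning] = field(default_factory=list)

    def escalate(self, user_facing_message: str) -> ExternalWarning:
        # Internal warnings may spawn external ones, never vice versa.
        warning = ExternalWarning(self.game, user_facing_message)
        self.external.append(warning)
        return warning


news = InternalWarning("ExampleGame",
                       "Studio announced layoffs; sources not yet updated.")
news.escalate("Recent team changes may not yet be reflected in this assessment.")
print(len(news.external))  # 1
```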
Note that we do not manipulate BIGmomma’s scores, even if we think her assessment of an issue has overstated or underplayed its significance. Rather, we will add comments or opinions. Note also that scores could misrepresent a game because of missing data.
Missing Data
The concept of “missing data” is inherently complex. Just as an example, there is missing data that exists (of real interest to moderators) and missing data that does not exist (of less interest to moderators); there is missing data that would improve the research if it existed, and missing data that would not enhance the research even if it existed.
First, recognize that we will learn about “missing data” in one of two ways: either we will find it, or our Users will. Either way, it is processed similarly, with the critical factor being whether an appropriate source discussing or documenting the missing data is available.
If we can find a reliable source, we upload the source and re-run the AI with the new information. We then re-moderate the output to check whether the information has been incorporated, adding explainers as necessary, e.g., to explain that specific information was provided but that BIGmomma did not assimilate it, or that the information was assimilated but the resulting reduction (or addition) in the score was smaller or larger than we expected. Again, the aim is to open the discussion and encourage user uploads, comments, etc.
How we respond to missing data depends on the circumstances in which it presents itself. The type of missing information and the extent to which we opinionate the response determine whether that response becomes an “Explainer”, an “Edit”, or a “Comment”.
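A hedged sketch of the re-run loop described above follows; the function names and the simple changed/unchanged check are assumptions for illustration, not BIGwiki’s pipeline.

```python
def rerun_with_new_source(question: str, sources: list[str],
                          new_source: str, run_ai) -> dict:
    """Re-run the AI with a newly found source and report whether the
    new information appears to have been incorporated."""
    before = run_ai(question, sources)
    after = run_ai(question, sources + [new_source])
    return {
        "changed": before != after,  # Did the output move at all?
        "before": before,
        "after": after,
        # If 'changed' is False, a human adds an explainer noting that the
        # information was provided but not assimilated; if the score moved
        # more or less than expected, an explainer opens the discussion.
    }


# Toy stand-in for the AI call, just to make the sketch runnable.
def fake_ai(question: str, sources: list[str]) -> str:
    return f"{question} -> answered from {len(sources)} sources"


result = rerun_with_new_source("Is this game any good?",
                               ["whitepaper"], "new audit report", fake_ai)
print(result["changed"])  # True
```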
2) Copyedit moderation
Copyedit moderation is an ongoing project that evolves with improvements in the AI. For now, our focus is twofold: 1) tightening BIGmomma’s arguments, not changing the arguments or points themselves, but improving their logical delivery and cohesion, and 2) improving the “online-friendly” readability and SEO performance of the writing.
The post-copyedit product is BIGmomma’s final product, which, as should be clear, is the culmination of an intricate balance of human-AI interaction at each stage of the research and moderation process.
Ideally, the product would be released on-site at this stage of its evolution! This is our aim; however, given the time it currently takes to inspect the research, early iterations will likely be copyedit-moderated through a live on-site workflow process. Moreover, while we strive for robust academic enquiry, minor errors, inconsistencies, and points of contention will inevitably slip through, which is where you, the Community, come in.
Our Community of Users can interact with the research in various ways. This includes, but is not limited to, Users leaving comments in response to BIGmomma or any other User-uploaded research, and Users uploading their own detailed research.
This is a great question, and the answer is a little tricky. First, it helps to understand how we process information at the summary level when multiple users’ research is uploaded to the site.
Recall that the research approach breaks down the analysis of a game into seven discrete Sections, each subdivided into Subsections. Gameplay, for example, is divided into Summary, How to Play, Gamer Engagement, Player Skills & Strategy, Graphics & Trailers, and Unanswered Questions, whereas the Team Behind the Game section is divided into Summary, Founders and Advisors, Core Team, and Unanswered Questions.
BIGmomma’s summarization tool produces summaries at three levels: subsection, section, and game level. It is at the subsection level that Users’ comments interact directly with BIGmomma’s assessment of the game.
Community Moderation via Subsection-Level Comments
Users can add comments on the site at the subsection level. How these comments are processed depends on the nature of the comment, in much the same way as with primary moderation. For example, if the comment notifies us of an “error” in data metrics, the data point is investigated and updated accordingly. Alternatively, if the comment concerns “content”, like relevant facts, opinions, commentaries, etc., the information is assimilated directly at the subsection level and indirectly into the section- and game-level summaries.
In this way, Users can comment on BIGmomma’s “opinion” of a game. In addition, Users may wish to offer a more detailed assessment of a game, which they can do by uploading their own research.
User-Uploaded Research
From Phase II, Users can upload their own detailed research, structured precisely like that produced by BIGmomma. Users can choose to upload a single subsection, a whole section, or even an entire game’s worth of research, with $BIGGIE rewards reflecting the extent, complexity, and significance of the contribution. Detailed explainers regarding BIGwiki’s $BIGGIE reward system are forthcoming!
Population of Opinions: Consensus View
BIGmomma’s assessment, or “opinion”, of the game is just one such opinion. For the consensus view, that is, the view across all the different Users’ opinions, we need to look at the collective contribution of User-uploaded research.
The pool of user-submitted research forms the “Population of Opinions”, with User and BIGwiki-generated submissions carrying equal weight. To formulate the consensus view of a game, BIGmomma’s summarization tool is used to generate summaries of the population of opinions at three levels: 1) at the subsection level, 2) at the section level, and 3) at the game level. Think of these as “rolling summaries”; that is, BIGmomma regularly updates the summaries as new research is submitted.
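As an illustrative sketch of the “rolling summaries” idea (the summarization call is stubbed, and the whole structure is an assumption rather than BIGwiki’s pipeline), each new submission triggers a re-summary at the subsection level, which in turn feeds the section- and game-level summaries:

```python
from collections import defaultdict

# Population of opinions: every submission (BIGmomma's or a User's)
# carries equal weight and is keyed by (section, subsection).
opinions: dict[tuple[str, str], list[str]] = defaultdict(list)


def summarize(texts: list[str]) -> str:
    # Stub for BIGmomma's summarization tool.
    return f"Summary of {len(texts)} contribution(s)"


def submit(section: str, subsection: str, research: str) -> dict:
    """Add a submission, then roll the summaries up the three levels."""
    opinions[(section, subsection)].append(research)
    subsection_summary = summarize(opinions[(section, subsection)])
    section_summary = summarize(
        [t for (sec, _), texts in opinions.items() if sec == section for t in texts])
    game_summary = summarize([t for texts in opinions.values() for t in texts])
    return {"subsection": subsection_summary,
            "section": section_summary,
            "game": game_summary}


print(submit("Gameplay", "Graphics & Trailers", "BIGmomma's assessment ..."))
print(submit("Gameplay", "Graphics & Trailers", "A User's detailed research ..."))
```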
Open Research Project – Live Case Study
That is basically the long-term vision, yes! BIGwiki is essentially an ongoing, open, transparent, community-driven, AI-fuelled, ever-evolving research project striving to undertake an extensive systematic evaluation of the Web3 gaming market.
Yes, you’re getting it. More specifically, the research process, the actual methodology and study design, is what gives rise to the research product. Moreover, remember that we do not have complete control over the final product (the black box is always gonna have its say), but what we can control, at least mostly, is the research process. Given our current resources, we strive to ensure our methodology is academically robust.
We will do everything we can to ensure the validity of the research process, but it is ultimately the collective contribution of our research community that counts. Our Community of Users will refine the final product by flagging errors, inconsistencies, and differing opinions and, most importantly, by uploading their own research.
Referencing and Internal Navigation System (RINS)
This is an excellent question. Essentially, we are using a single academic referencing system in two ways: as a referencing system AND as an internal navigation system. The AI and Research Departments are currently improving the granularity of the internal navigation elements of the system. Once this is complete (Phase II), we will have a fully functional referencing and internal navigation system.
For now, we are focused on reporting the sources. We have included a full “Bibliography” section that lists all the sources supplied to BIG’s AI system.
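Purely as a hypothetical illustration of one record serving both purposes, a single source entry could carry its bibliographic fields alongside an internal anchor for on-site navigation; the field names are assumptions, not RINS’s actual design.

```python
from dataclasses import dataclass


@dataclass
class SourceRecord:
    """One CRAAP-approved source, usable both as a bibliography entry
    and as an internal navigation target."""
    key: str       # e.g. "[2]" as cited in the text
    citation: str  # full bibliographic reference
    url: str
    anchor: str    # on-site anchor the inline citation links to

    def bibliography_line(self) -> str:
        return f"{self.key} {self.citation} {self.url}"


craap = SourceRecord(
    key="[2]",
    citation="California State University. CRAAP Test - Source Evaluation. 2024.",
    url="https://libraryguides.fullerton.edu/CRAAP",
    anchor="#source-2",
)
print(craap.bibliography_line())
```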
References:
[1] About the Cochrane Database of Systematic Reviews | Cochrane Library. Accessed October 7, 2024. https://www.cochranelibrary.com/cdsr/about-cdsr
[2] California State University. CRAAP Test - Source Evaluation - LibGuides at California State University, Fullerton. 2024. Accessed October 7, 2024. https://libraryguides.fullerton.edu/CRAAP
[3] Chang R. “All Things Considered”. Philosophical Perspectives: Ethics. 2004;18.