Procedures

Methodology of Cazadores

This is a translation of the section “Metodología Cazadores.”

Cazadores de Fake News detects and verifies misinformation and disinformation in Venezuela and Latin America. Our verification process, criteria, and principles are summarized in this methodology.

What cases do we verify and why?

All the cases we verify meet at least one of the following key criteria:

  1. Virality: We prioritize cases that are viral on at least two social networks, that reach us through our Central Cazadores chatbot while circulating on another social network, or that are broadcast in mass media such as radio, press, or television.
  2. Relevance: We select cases that are highly relevant to Venezuela or Venezuelans. We prioritize those related to critical issues such as public health, elections, civil coexistence, discrimination, natural disasters, and influence operations on public interest topics, as well as those deployed against journalists, media outlets, or human rights defenders.
  3. Verifiability: We only verify claims that can be proven or disproven with existing, reliable information sources and tangible data. We do not verify opinions or predictions.

We do not publish debunks of cases that are neither viral nor relevant, to avoid unnecessarily amplifying their exposure to the public and exaggerating the significance of both the misinformation and the actors contributing to its spread. Our organization only verifies data and facts that can be cross-checked; we do not verify opinions, analyses, or predictions, although we may point out falsehoods that underpin each case.

At Cazadores de Fake News, we believe that civil society should receive accurate, transparent, and unbiased information to strengthen its resilience against misinformation. Therefore, we address misinformation regardless of its origin or whom it affects. We consider it essential to combat misinformation that has the potential to cause harm, regardless of where it falls on the political spectrum or which power actors it comes from or affects.

Our philosophy on fighting misinformation and disinformation, regardless of their source, is detailed in a document we call the Cazadores Manifesto.

How do we conduct our verifications?

Our verifications consist of four steps:

  1. Detection: Members of the editorial team continuously monitor social media posts, content websites, groups on instant messaging services, and media outlets such as radio and television. We also evaluate reports sent by members of our community through the “Central Cazadores” chatbot, our Telegram groups, social networks, instant messaging, or email at prensa@cazadoresdefakenews.info. Before starting the investigation of each case, an editor evaluates its virality, relevance, and verifiability.
  2. Investigation: A verifier investigates the origin of each case under study, searching for primary sources, understanding its context, and finding evidence to debunk (or confirm) it, using open-source searches or, if necessary, advanced techniques and tools for data and multimedia content analysis. At the end of this phase, the verifier produces a first draft.
  3. Writing and Editing: The first draft is reviewed by an editor, who confirms each argument presented by the verifier, adds further arguments if necessary, and evaluates whether the draft meets the organization’s style and quality requirements.
  4. Publication: Once the previous step is completed, the final content is generated and published on the cazadoresdefakenews.info website and the organization’s social networks. The resulting article will present the evidence supporting each finding precisely, avoiding value judgments or expressions that could be interpreted as such, and with the necessary hyperlinks so the reader can verify the cited sources and data.

How do we classify our verifications and investigations?

To classify the cases we investigate and verify, we use several categories that allow us to assess and clearly communicate the nature of the content under review:

  1. Falso (False): When the information is completely false: a totally fabricated statement, content published by an account impersonating another, a falsified document, a photo or video whose original meaning has been distorted through digital manipulation, content created entirely synthetically, or a social media post for which no archive, backup, or interaction exists to verify it was ever made. It is identified with a red ribbon and the text “False”. Some examples of checks with this category include this, this, and this.
  2. Engañoso (Misleading): When the information has some legitimate or true element that can be verified, but it is presented out of context or misleadingly. It is identified with a purple ribbon and the text “Misleading”. Some debunks labeled with this category are this, this, and this.
  3. Lo que sabemos (What we know): When we want to provide context on a public interest case where important information should be presented, but not all details are known. It covers situations with two inconclusive or even contradictory versions of a case, as well as misinformation cases or misleading narratives that mix false content, misleading content, rumors, and information that cannot be verified at the time, and that, due to their complexity, cannot be classified simply as “False” or “Misleading”. It is identified with a blue ribbon and the text “What we know”. Among the articles published with this classification are this, this, and this.
  4. Verdadero (True): In special cases, when it is necessary to reiterate the veracity of content that can cause confusion or that was previously classified, in a different context, as “False” or “Misleading”. It is identified with a green ribbon and the text “True”. Some examples of this category are this, this, and this.

What measures do we take to identify misleading pieces and other informational disorders so that they are not confused with authentic content published by regular users?

When documenting content that is neither authentic nor published by regular users (content generated by fake accounts, in coordinated campaigns, or created with generative Artificial Intelligence, AI) and other informational disorders (hate speech, discriminatory content, propaganda, misinformation, doxxing), we identify the multimedia piece with a label describing the type of content shown. This allows users to immediately recognize misleading elements or other types of informational disorders, regardless of whether they contain false, misleading, or true information.

In all the cases mentioned, the labels are yellow, display the word corresponding to the category along with an alert triangle, and are placed over the piece so that they cover at least 20% of its surface, making them easy to detect.

This is an example of the use of these labels for misleading pieces and other informational disorders.

Criteria for identifying misleading pieces and other informational disorders

Fake Accounts

At Cazadores de Fake News, we consider fake accounts (trolls, bots, and accounts with bot-like behavior) to be digital assets regularly used to spread misinformation in the context of influence operations.

When investigating fake accounts, we have two approaches:

Isolated Fake Accounts: In this category, we consider, for example, fake news outlets or isolated troll accounts spreading misinformation, participating in stigmatization campaigns, or spreading toxic content (discriminatory content, cyberbullying, etc.). We will investigate accounts in this category if it can be proven that they have been involved in more than five cases during a year. When explaining our findings about these accounts to our readers, we describe the cases or content in which they have been involved (misinformation, attacks, or misleading narratives) and the arguments that led us to conclude that it is a fake account (identity impersonation, use of stolen photos, username changes, follower purchases, etc.).

An investigation on an isolated fake account of interest is this.

Fake Account Networks: In this category, we consider networks of three or more troll accounts, bots, or bot-like accounts involved in the coordinated promotion of misinformation, participating in stigmatization campaigns, or spreading toxic content (discriminatory, cyberbullying, etc.). When explaining our findings about these account networks to our readers, we will describe the cases or content in which the network has been involved and show at least five common patterns demonstrating that the accounts are part of a specific influence operation. In the case of networks operated by identifiable real people (influencer networks, astroturfing networks), the privacy of the accounts forming them will be guaranteed unless one has more than 5,000 followers (influential accounts) or is a public figure.

An example of an investigation on fake account networks is this.

Coordinated Campaigns

We are also interested in investigating coordinated campaigns on social networks: organized, systematic efforts aimed at manipulating public opinion in favor of a specific narrative in a non-spontaneous manner. These campaigns usually involve multiple accounts participating on one or several digital platforms and can amplify misinformation, rumors, propaganda, or true information promoting certain narratives.

An example of an analysis of coordinated campaigns can be consulted here.

Content Created with Generative Artificial Intelligence (AI)

Sometimes, we label content created with generative AI that may confuse our readers. By content created with generative AI, we mean text, images, audio, and/or video produced by artificial intelligence algorithms. Although this technology can have positive applications, in the context of misinformation it facilitates the creation of toxic and misleading content with a realistic appearance. Such content can include fake news, deepfakes, and other types of digital manipulation that deceive information consumers, erode trust in the media, and amplify misinformation campaigns.

An example of content created with generative AI studied by CFN is here.

Propaganda

When identifying content as propaganda, we point out that it is a biased message aiming to influence public opinion and promote a particular agenda, even though it may not be strictly “false” or “misleading.” We often find propaganda presented in a persuasive and manipulative manner, in the form of hyperpartisan content. We label this content as “propaganda” because we need to document it but cannot present it unidentified, as our users might confuse it with regular news.

This video shows an example of the use of this category.

Disinformation (in general)

Sometimes, speeches or multimedia pieces contain a complex mix of false content, misleading content, rumors, and manipulations designed to feed a specific narrative, especially when analyzing content from radio and television. In such cases, when we cannot describe them using a single label like “false” or “misleading,” we prefer to mark the content with the umbrella category “disinformation,” to avoid presenting it unidentified and having it confused with regular news.

Here is an example of how we use this category.

Source Policy

At Cazadores de Fake News, we recognize that we operate in a polarized context, and we therefore address misinformation by focusing on tracking and debunking hoaxes, rumors, and any misleading content. To do this, we use evidence found in open sources, such as social networks, websites, online photos, and digital archives, that debunks each hoax or piece of misinformation on its own. We always reference and link to two or more pieces of primary evidence, allowing anyone to verify our arguments and replicate the process.

Although we prefer to work with this type of digital evidence, we occasionally consult relevant human sources to better understand certain cases, events, or phenomena. These consultations help us obtain information not available in open sources or reorient investigations. In these cases, the information provided by such sources is treated as secondary and does not influence the final conclusions of each verification, which depend on our own investigations.

In exceptional cases where open sources do not provide enough evidence and the case is of high relevance (such as rumors circulating during the Covid-19 health emergency), we rely on human sources to help verify high-interest cases. When this occurs, we make sure to obtain information backed by at least two reliable and independent sources before issuing our conclusions.

When citing experts, we indicate their position and experience to clarify why their assessment is relevant to the content. Cazadores de Fake News does not use anonymous sources. If a source prefers not to be identified, their testimony cannot by itself debunk the investigated misinformation, although it may be used to deepen the related investigation, with every argument confirmed by evidence found in open sources or verified by at least two reliable and independent sources.

If a source may have particular interests (a biased source), those interests will be disclosed so the reader can understand that they could influence the accuracy of the evidence provided.

Confidentiality and Exposure Policy

We hide the identity of sources and users who distribute misleading content because we consider that most of those who amplify it are not its origin but rather victims of misinformation. However, we do not hide the identity of sources or users who have spread misinformation more than five times in a year and/or who are public figures, radio, TV, or streaming programs, news portals, or social network profiles with a community of more than 5,000 followers.

In photos or videos, we always hide the identity of minors and of people not directly linked to the origin or spread of the misinformation, and we do not publish personal information (personal documents, phone numbers, or addresses) of any affected individuals.

An example of content where an involved person’s identity is protected is this; another where someone’s identity is shown is this.

How do we handle the use of artificial intelligence at Cazadores de Fake News?

At Cazadores de Fake News, we use artificial intelligence (AI) ethically, responsibly, and transparently. AI amplifies the voice of our journalists and researchers, assisting in the creation of high-quality content without replacing them. We clearly identify any significant AI contribution to keep our readers informed. We promote a culture of learning, respect, and ethical AI use. For more information, we invite you to read our AI Manifesto.

We correct our mistakes

We believe in the importance of transparency and value the trust of our readers. Therefore, if we identify any errors or inconsistencies in our articles, we will make the corresponding correction and leave an explanatory note with details about the correction and its date at the top of the article. If there is a social media post reflecting an error, we will delete that post and share a new one with the corrected information. With these measures, we aim to prevent the further spread of any previously identified error, ensuring the transparency and accuracy of our work and mitigating the impact of the original version.

When an update is made, it is noted in the initial field of the summary, with the date of the update and the section of the article where the new data were added. If there is a category change, or if an adjustment or clarification is needed to maintain the article’s context, an update note will be included in the content summary. An example can be read here.

If the update contains new information unrelated to the general structure of the article, a new section or subtitle will be created at the beginning of the article where the new information will be presented. An example of this situation can be seen here.

If we detect a typographical, spelling, grammatical, or punctuation error, we may correct it without prior notice, as long as it does not affect the correct understanding of the information or substantially alter the general meaning of the article.

If you detect any errors or want to make any corrections to our investigations, you can write to us at correcciones@cazadoresdefakenews.info.


This methodology is current as of July 26, 2024, but we are aware that it will need constant improvement based on our observations, needs, and lessons learned. Our previous methodology, valid until July 26, 2024, is archived here (in Spanish).

Cazadores de Fake News investigates each case in detail by searching for and finding digital forensic evidence in open sources. In some cases, data not available in open sources are used to reorient investigations or gather more evidence.