How we verify information

This is how we investigate: which cases we choose to verify, and why. No shortcuts.

Updated February 13, 2026 · Santiago de Chile

Cazadores de Fake News (CFN) is an association founded in 2019 and legally registered as a non-governmental organization in Santiago, Chile, on December 16, 2021, under the name «ONG Cazadores de Fake News» (RUT 65.206.315-2). It was created to collaboratively analyze disinformation and propaganda spread in Venezuela and Latin America, to engage citizens in organized efforts to combat the disinformation phenomenon, and to promote digital rights, freedom of expression, and freedom of the press across the region. Our verification process, criteria, and principles are summarized in this methodology.

01

What cases do we verify, and why?

All cases we verify meet at least one of the following key criteria:

1

Virality

We prioritize cases that are viral on at least two social media platforms, that were received through our Central Cazadores chatbot and are also circulating on another social network, or that have been broadcast by mass media outlets such as radio, press, or television.

2

Relevance

We select cases of high relevance to Venezuela or Venezuelans. We prioritize those related to critical issues such as public health, elections, civil coexistence, discrimination, natural disasters, and influence operations on matters of public interest, or those targeting journalists, media outlets, or human rights defenders.

3

Verifiability

We only verify claims that can be confirmed or refuted using existing, reliable sources and tangible data. We do not verify opinions or predictions.

We will not publish debunks of cases that are not viral or relevant, in order to avoid unnecessarily amplifying their public exposure and overstating the significance of both the hoax and the actors who helped spread it. Our organization only checks data and facts that can be fact-checked, and does not verify opinions, analyses, or predictions, although we may point out falsehoods that underpin a given case.

At Cazadores de Fake News, we believe that civil society should receive accurate and transparent information, free of bias, in order to strengthen its resilience against disinformation. For this reason, we address disinformation regardless of its origin or whom it affects. We consider it essential to combat disinformation whenever it has the potential to cause harm, regardless of where on the political spectrum it originates or which power actors it comes from or targets.

Our philosophy on fighting disinformation — wherever it comes from — is detailed in a document we call the Cazadores Manifesto.

02

How do we carry out our fact-checks?

Our fact-checks consist of four steps:

1

Detection

Members of the organization’s editorial team continuously monitor posts on social media, content websites, instant messaging groups, and media outlets such as radio and television. Reports submitted by members of our community through the «Central Cazadores» chatbot, our Telegram groups, or received via social media, instant messaging, or email at prensa@cazadoresdefakenews.info are also evaluated. Before launching an investigation into each case, an editor will assess its virality, relevance, and verifiability.

2

Investigation

A fact-checker will research the origin of each case, looking for primary sources, understanding the context, and finding evidence to debunk (or confirm) it — conducting open-source searches or, when necessary, using advanced data analysis techniques and multimedia content tools. At the end of this phase, the fact-checker will produce a first draft.

3

Writing and editing

The first draft will be reviewed by an editor, who will confirm each of the arguments put forward by the fact-checker, add further arguments if necessary, and assess whether it meets the organization’s style and quality standards.

4

Publication

Once the previous step is complete, the final content will be published on the website cazadoresdefakenews.info and on the organization’s social media channels. The resulting article will present the evidence supporting each finding accurately, avoiding evaluative language or phrasing that could be interpreted as a value judgment, and will include the hyperlinks necessary for readers to independently verify the sources and data cited.

03

How do we classify our fact-checks and investigations?

To classify the cases we investigate and verify, we use several categories that allow us to assess and clearly communicate the nature of the content evaluated:

False

Entirely false information

When the information is entirely false: a completely fabricated claim, content published by an account impersonating another, a forged document, a photo or video whose original meaning has been distorted through digital manipulation, fully synthetic content, or a social media post for which no evidence exists that it was ever made (never archived, backed up, or the subject of any recorded interaction). It is identified with a red label and the text «False.»

See example → · See example → · See example →

Misleading

Decontextualized information

When the information contains a legitimate or verifiable element, but is presented out of context or in a misleading way. It is identified with a purple label and the text «Misleading.»

See example → · See example → · See example →

Context

Information requiring context

When we want to provide more information about a matter of public interest where important data needs to be presented, but not all details are known. This includes situations with two inconclusive — even contradictory — versions of an event, disinformation cases, or disinformation narratives that mix false content, misleading content, rumors, and information that cannot be verified at the time and is too complex to be classified simply as «False» or «Misleading.» It is identified with a blue label and the word «Context.» This label is an update of the former «What We Know» category, which was in use until February 2026.

See example → · See example → · See example →

True

Confirmed veracity

In special cases where it is necessary to reaffirm the veracity of content that may be confusing or that was previously, in a different context, classified as «False» or «Misleading.» It is identified with a green label and the text «True.»

See example → · See example → · See example →

04

What measures do we take to identify disinformation pieces and other information disorders so they are not confused with authentic content published by regular users?

When we document content that is not authentic and was not published by regular users, whether generated by fake accounts, spread in coordinated campaigns, or created with generative Artificial Intelligence (AI), as well as other information disorders such as hate speech, discriminatory content, propaganda, disinformation, or doxxing, we label the multimedia piece with a stamp describing the type of content shown. This allows users to immediately recognize that the material is disinformative or constitutes another type of information disorder, regardless of whether it contains false, misleading, or true information.

In all cases, the stamps are displayed in yellow, with the word corresponding to the relevant category, a warning triangle, and are placed over the piece in a way that covers at least 20% of its surface, so they are easy to spot. See an example of how these stamps are used →

05

Identification criteria for disinformation pieces and other information disorders

Fake accounts

At Cazadores de Fake News, we consider fake accounts (trolls, bots, and accounts with bot-like behavior) to be digital assets regularly used to spread disinformation in the context of influence operations. When investigating fake accounts, we take two approaches:

Isolated fake accounts

Under this category we include, for example, fake news outlets or isolated troll accounts that spread disinformation, participate in stigmatization campaigns, or distribute toxic content (discriminatory material, cyberbullying, etc.). We will investigate accounts in this category if it can be proven that they have been involved in more than five cases over the course of a year. When presenting our findings about these accounts to readers, we will explain the cases or content they were involved in promoting (disinformation, attacks, or disinformation narratives) and the arguments that led us to conclude why the account is fake (identity impersonation, use of stolen photos, username changes, purchase of followers, etc.).

See investigation →

Fake account networks

Under this category we include networks of two or more troll accounts, bots, or bot-like accounts that have been involved in the coordinated promotion of disinformation, that participate in stigmatization campaigns, or that distribute toxic content (discriminatory material, cyberbullying, etc.). When presenting our findings about these networks to readers, we will describe the cases or content the network was involved in promoting and will show at least five common patterns that demonstrate the accounts are part of a specific influence operation. If the network consists of accounts operated by real, identifiable individuals (influencer networks, astroturfing networks), the privacy of the accounts involved will be protected, unless any account has more than 5,000 followers (influential accounts) or belongs to a public figure.

See investigation →

Coordinated Campaigns

We are also interested in investigating coordinated campaigns on social media — organized, systematic efforts to manipulate public opinion in favor of a specific narrative in a non-spontaneous way. These campaigns typically involve multiple accounts across one or more digital platforms and can amplify disinformation, rumors, propaganda, or true information that promotes certain narratives.

See an example of a coordinated campaign analysis →

Content created with Generative Artificial Intelligence (AI)

We sometimes label content created with generative AI that may confuse our readers. By generative AI content, we mean texts, images, audio, and/or videos produced by artificial intelligence algorithms. While this technology can have positive applications, in the context of disinformation it facilitates the creation of toxic and deceptive content with a realistic appearance. This type of content can include fake news, deepfakes, and other forms of digital manipulation that mislead information consumers, erode trust in the media, and amplify disinformation campaigns.

See an example of generative AI content studied by CFN →

Propaganda

When we identify content as propaganda, we are flagging it as a biased message intended to influence public opinion and promote a particular agenda, even if it is not strictly «false» or «misleading.» We often encounter propaganda presented in a persuasive and manipulative way, in the form of hyper-partisan content. We label this content as «propaganda» because we need to document it, but cannot present it without identification, as our users might otherwise mistake it for regular news.

See an example of how we use this category →

Disinformation (general)

Occasionally, certain speeches or multimedia pieces contain a complex mix of false claims, misleading content, rumors, and manipulations designed to feed a specific narrative — especially when we analyze content extracted from radio and television. In those cases, when we cannot describe them using a single label such as «false» or «misleading,» we prefer to mark the content with the umbrella category «disinformation,» in order to avoid presenting it without identification and having it mistaken for regular news.

See an example of how we use this category →

06

Source policy

At Cazadores de Fake News, we recognize that we operate in a polarized context, and we therefore approach disinformation by focusing on tracking and debunking hoaxes, rumors, and any disinformative content. To do so, we rely on evidence found in open sources, such as social media, websites, online photos, and digital archives, that debunks each hoax on its own. We always reference and link to two or more primary pieces of evidence, allowing anyone to verify our arguments and replicate the process.

While we prefer to work with this type of digital evidence, we occasionally consult relevant human sources to better understand certain cases, events, or phenomena. These consultations help us obtain information not available in open sources or to redirect investigations. In such cases, the information provided by these sources is treated as secondary and will not influence the final conclusions of each fact-check, which will depend on our own investigations.

In exceptional cases where open sources do not provide sufficient evidence and the case is of high relevance, we rely on human sources to help verify cases of significant public interest. When this occurs, we ensure that the information is backed by at least two reliable and independent sources before reaching our conclusions.

When citing experts, their position and relevant experience will be stated, so it is clear why their assessment is pertinent. Cazadores de Fake News does not use anonymous sources. However, if a source prefers not to be identified, their testimony cannot serve to debunk the disinformation being investigated, though it may be used to deepen each related investigation. If a source may have particular interests (a partial source), those interests will be disclosed so that readers can understand they may influence the accuracy of the evidence provided.

07

How do we handle the use of artificial intelligence at Cazadores de Fake News?

At Cazadores de Fake News (CFN), we use artificial intelligence (AI) in an ethical, responsible, and transparent manner. AI amplifies the voice of our journalists and researchers, collaborating in the creation of high-quality content, without replacing them. We clearly identify any significant AI contribution in order to keep our readers informed. We promote a culture of learning, respect, and ethical use of AI.

For more information, we invite you to read our AI Manifesto →

This methodology was last updated on February 13, 2026, but we are aware that it will need to be continuously improved based on our observations, needs, and lessons learned. Our previous methodology, which was in effect until July 26, 2024, is archived here.

Cazadores de Fake News investigates each case in detail through the search
and discovery of digital forensic evidence in open sources. In some cases,
data not available in open sources is used in order to redirect investigations
or gather additional evidence.

— Cazadores de Fake News