Warfare has evolved significantly over the years. The unfortunate truth is that people often channel much of their creativity into fighting, and in the information age, war becomes as much about information as it is about guns and ammo. This week’s “Ungarbled Tech” will be one of the more serious ones we’ve written.

False information online falls into two broad categories: misinformation and disinformation. The difference lies in intent: misinformation is spread out of ignorance, while disinformation is spread deliberately to serve an agenda. This type of information warfare has existed for a long time, and we have written about it before.

Recent versions of this issue revolve around social media and the abuse of online anonymity.

As hard as we try, we cannot ignore the war in Gaza. For much of the world, the battle waged by Hamas bots on social media has become the center of the conflict. With even the largest media outlets using social media for information and most people getting at least part of their news from these platforms, Hamas and other bad actors have leapt into the information war. Propaganda's newest descendant is a vast ocean of false information presented as fact by people across the globe. These actors use every technique in the book to convince readers they are spreading verified, true news, when in fact they are repeating lies until those lies go viral. The result is a battle over public perception that rivals the boots-on-the-ground fighting.

Before moving into bots and the new dangers, what are the existing misinformation and disinformation campaigns? Where do they come from? Most importantly, who is spreading them? Well, here is the answer nobody wants to hear: everyone.

There's a difference between maliciously lying about a topic you have no business weighing in on and sharing something you were misled into treating as fact. Recently, around GCG, we shared an article published by the Center for Strategic and International Studies (CSIS). This organization, which has existed for more than 60 years, is one of the foremost authorities on digital and physical security for governments and citizens alike. In a post published since the war began, CSIS highlighted seven instances on X (formerly Twitter) where misleading videos or information went viral, falsely portraying the current conflict.

These tweets are a terrific example of the contrast between maliciously spreading false information and relaying something by accident. The examples range from video game footage passed off as real-world video and footage from other conflicts repackaged as Gaza or Israeli cities, to the most glaring piece of disinformation: a "report" from the misspelled "Jerusalam" Post, posing as the real Israeli news organization The Jerusalem Post, claiming that Prime Minister Bibi Netanyahu had been admitted to a hospital.

These tweets expose a glaring hole in verification on X, one that exists on every social media platform. Some platforms try harder than others to keep dangerous lies away from their users, but the chaos and distrust created by this campaign have let Hamas and others exploit the newly minted ambiguity of online information with the newest lying technology: bots.

We recently read an article at GCG on a report published by the Foundation for Defense of Democracies (FDD). The FDD, founded in the wake of 9/11, has staff with expertise ranging from defense to journalism who investigate issues of security and foreign policy. Their article focuses on two critical instances of disinformation since the conflict began. The first is the explosion at the Al Ahli hospital in northern Gaza, which took place late on October 17. As information poured in, global media immediately echoed Hamas' claim that the Israeli Air Force had bombed the hospital, killing hundreds of people seeking shelter from the fighting. Outlets relayed Hamas' figure of more than 500 dead, and it took hours before any conflicting information came through. Eventually it did: video footage showed rockets fired from within Gaza, the Israeli army confirmed that no planes were in the air at the time of the explosion, and international intelligence agencies said their evidence contradicted the initial story.

Death tolls dropped, blame shifted, and the hospital bombing became a misfire that landed in the hospital parking lot. The damage, however, was done. The New York Times, for example, did not publish a fully revised statement until November 3, two weeks after Israel was blamed. Hamas also knows that the world, Israel included, understands how tragic civilian deaths are, even during wartime. It has used that empathy to its advantage, sullying the name of the IDF and making the world forget the efforts the Israeli army makes to minimize even the inevitable harm to civilians.

Disinformation thrives in the relatively brief window from an incident to verification. It only takes a few people to repeat an unsupported claim for a lie to cause irreparable damage. Newer technology has made those few people far easier to come by, by inventing them.

Bot farming is a simple concept: a single person writes a program that creates new accounts and spams false or negative information online. The targets span the entire breadth of the internet; everyone and everything can fall victim to bot accounts. Movie fans may notice films with horrible reviews before the trailer is even released, or YouTube videos with the same comment repeated down the screen. These are the obvious examples, but the internet is seldom so obvious. Once a small group of invented people "cites" an unverified claim, the question of whether it is true becomes irrelevant. Maybe the first reader notices that the claim is unverified, but after 10, or 10,000, people read the same thing, the truth is replaced by blind trust.
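For the technically curious, here is a minimal sketch, in Python, of how one might flag the "same comment, many accounts" pattern described above. The account names, comment texts, and threshold are invented for illustration; real comments would have to come from a platform's own API or a data export.

```python
from collections import defaultdict

# Hypothetical sample of (account_name, comment_text) pairs pulled from one post.
# Real data would come from a platform API or export; these rows are made up.
comments = [
    ("user_8841", "This footage proves everything. Share before it gets deleted!"),
    ("sarah_reads_news", "Interesting piece, thanks for posting."),
    ("truth_teller_204", "This footage proves everything. Share before it gets deleted!"),
    ("global_watcher_77", "This footage proves everything. Share before it gets deleted!"),
]

def flag_copy_paste_amplification(comments, min_accounts=3):
    """Group identical comment texts and flag any text posted by several different accounts."""
    accounts_by_text = defaultdict(set)
    for account, text in comments:
        accounts_by_text[text.strip().lower()].add(account)
    return {
        text: sorted(accounts)
        for text, accounts in accounts_by_text.items()
        if len(accounts) >= min_accounts
    }

for text, accounts in flag_copy_paste_amplification(comments).items():
    print(f"Possible coordinated posting by {len(accounts)} accounts: {text!r}")
```

The point of the sketch is not the code itself but the idea behind it: identical wording repeated by "different" people is one of the cheapest tricks in bot farming, and one of the easiest to spot once you look for it.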

Since the war began, most of the disinformation has come from a mere 67 Hamas accounts. This small group has been able to lie to the whole world, spreading disinformation by giving an untrue source a friendlier digital face. This isn't the first time we have seen this; after all, most coronavirus disinformation was spread by only 12 accounts online.

Hamas' campaign to smear Israel and the Jewish people and garner support has employed these bot accounts to repeat unverified information, lending it credibility before real users ever see it. After a while, readers either believe Hamas' lies outright or are so bombarded with untrue or unverified claims that they doubt true claims as if they were lies. Either way, it's a loss for Israel, the Jewish people, and the world.

I couldn't just scare you and move on, so here are the best ways to spot misinformation online. We have spoken about this before, but an easy way to vet information is by its source. My high school biology teacher would laugh at this, but I, for example, am not qualified to give anyone medical advice. So if I do try to give medical advice, I need to cite someone qualified to have an opinion on medicine.

When you are online, check who you are listening to and whether their claimed qualifications can be verified. Do they have a background in intelligence that you can confirm elsewhere? Have reputable sources cited them before? Can their claim be easily verified? Asking yourself these questions can save you from believing misinformation, or even spreading it yourself.

Bot accounts have a similar screening process. Is the name spelled correctly? Would an Israeli news agency misspell Jerusalem on its social media accounts? If there is an image, look for flaws in the picture: an AI-generated image may give someone six fingers instead of five, or put a menorah in a window when it is supposed to depict Rosh Hashanah. Imperfections like these only need to survive long enough to do harm; by the time they come under scrutiny, the world has moved on and the damage is done.
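The name check, at least, can be partly automated. Here is a minimal sketch that compares an account name against a short, hand-maintained list of real outlet names; the list and the threshold are assumptions for illustration, in the spirit of the misspelled "Jerusalam" Post above.

```python
import difflib

# A hand-maintained list of real outlet names to compare against (illustrative only).
KNOWN_OUTLETS = ["jerusalem post", "times of israel", "reuters"]

def looks_like_impostor(account_name, known=KNOWN_OUTLETS, threshold=0.85):
    """Return the real outlet an account name closely resembles without matching exactly."""
    name = account_name.strip().lower()
    for outlet in known:
        similarity = difflib.SequenceMatcher(None, name, outlet).ratio()
        if name != outlet and similarity >= threshold:
            return outlet
    return None

print(looks_like_impostor("Jerusalam Post"))  # flagged: one letter off from "jerusalem post"
print(looks_like_impostor("Jerusalem Post"))  # None: exact match to the real name
```

A near-miss name that is almost, but not quite, a trusted outlet is exactly the pattern an impostor account relies on readers skimming past.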

A personal favorite is checking the age of an account. The most established media outlets often post on YouTube or Instagram, and both platforms make account age easy to check. At the bottom of a YouTube channel's description, YouTube lists the date the account was created. Instagram shows an account's age under "About this Account," along with its verification date and home location. If an account seems too new to be real, it most likely is, and if an account claiming to represent a country is based in a different country, it should not be trusted. Anything bigger than one person, like a country or a news agency, would not have created its account on an established platform only recently. Israel's YouTube account, for example, dates to 2008 and its Instagram to 2012; a "national" account that only joined a well-established platform a decade late should strike you as odd, and is likely not real. An account that is brand new, say from the past month, but shows an inhuman amount of activity may well be a bot. Other red flags include posting the same text hundreds of times or a username that looks machine-generated.
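That rule of thumb can be written down almost word for word. The sketch below assumes you have already copied an account's creation date and rough post count off its profile; the accounts, dates, post counts, and thresholds are all invented for illustration (even the exact day shown for Israel's 2008 account is made up, only the year comes from the example above).

```python
from datetime import date

# Hypothetical account metadata copied by hand from each profile's "About" page.
# The exact creation days and post counts below are invented for illustration.
accounts = [
    {"name": "Israel (official)", "created": date(2008, 9, 1), "posts": 4200},
    {"name": "breaking_news_now_1139", "created": date(2023, 10, 12), "posts": 9800},
]

def looks_suspicious(account, today=date(2023, 11, 15),
                     min_age_days=90, max_posts_per_day=50):
    """Flag accounts that are very new, or that post at an inhuman rate."""
    age_days = max((today - account["created"]).days, 1)
    too_new = age_days < min_age_days
    too_active = account["posts"] / age_days > max_posts_per_day
    return too_new or too_active

for account in accounts:
    verdict = "worth a closer look" if looks_suspicious(account) else "looks established"
    print(account["name"], "->", verdict)
```

The thresholds are judgment calls, not magic numbers; the point is simply that account age and posting rate together tell you a lot about whether a human is behind the screen.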

Hopefully, all of these tools can help you be informed and safe online. Once again, we are here for our friends, both in uniform and not, as we get through this time. We are hoping for your safety, both online and in the real world. We also want to highlight our former interns, as well as hundreds of thousands of others, who have been putting their lives on hold to protect the State of Israel during this difficult time. עם ישראל חי


By Mendy Garb and Shneur Garb

Shneur Garb is the founder of the Garb Cloud Consulting Group out of Teaneck. Mendy Garb is the COO and is based in Herzliya, Israel.
