
You are reading the article How To Block Posts Based On Language On Mastodon, updated in February 2024.

Mastodon offers a lot of options on its platform to give you better control over what’s visible to you and what may interest you. If you’ve been actively using Mastodon only to see posts in languages you don’t know, there’s a better way to browse through content on the platform. To make sure you only see posts created in languages you know, Mastodon helps you effectively block all the other languages from your timelines. 

In this post, we’ll explain to you how you can block posts made in certain languages from showing up on Mastodon, what happens when you block languages, and how to unblock them on your account. 

How to block posts from certain languages on Mastodon

Mastodon lets you block posts in specific languages by allowing you to choose your preferred languages for viewing posts on the platform. When you select your preferred languages, all the languages you didn’t select will be blocked, and posts in unselected languages won’t show up inside your public timelines, including the Local and Federated timelines. If you follow people who occasionally post in a language you don’t speak, you may still see those posts on your Home timeline. 

Inside the Preferences screen, scroll down to the Filter languages section. Under this section, you’ll see a list of all the languages that are supported on Mastodon. Because the languages you want to block will likely far outnumber the ones you want to keep, you instead select the languages you prefer to see on Mastodon.

This way, only posts from the languages you select will be visible inside public timelines and all the other languages that you don’t select will get blocked automatically. So, you’re essentially selecting a language you know and want to view content from instead of actively blocking a language you don’t want to see. 

From the Filter languages section, check the boxes adjacent to your preferred languages (the ones you want to view posts from). For instance, if you don’t read or speak any language except English, you’ll select English from the Filter languages section. This way, all the other (unselected) languages will remain blocked on public timelines. 

Mastodon will now only show posts that were shared using your preferred languages and block all the other languages from appearing on your feeds. 
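Mastodon applies this preference server-side, but the effect is easy to illustrate: statuses returned by Mastodon’s public timeline API include a `language` field (an ISO 639 code, which can be null when detection fails), so the filter amounts to keeping only posts whose code is in your allowed set. Here is a minimal Python sketch with made-up sample posts; note it keeps posts with no detected language, mirroring the imprecision Mastodon itself warns about:

```python
def filter_statuses(statuses, preferred_languages):
    """Keep posts whose ISO 639 language code is in the preferred set.
    Posts with no detected language (None) are kept, since language
    detection on Mastodon can be imprecise."""
    allowed = set(preferred_languages)
    return [s for s in statuses
            if s.get("language") is None or s["language"] in allowed]

# Hypothetical sample of what a timeline response might contain:
timeline = [
    {"id": "1", "language": "en", "content": "Hello"},
    {"id": "2", "language": "fr", "content": "Bonjour"},
    {"id": "3", "language": None, "content": "(no language tag)"},
]

visible = filter_statuses(timeline, ["en"])  # keeps posts 1 and 3
```

A real client would fetch the statuses from an instance’s public timeline endpoint first; the filtering logic itself stays the same.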

What happens when I block posts based on language

As we have explained above, the only way to block posts uploaded in certain languages is by selecting the language you prefer to use. So, when you use the aforementioned guide to block languages, Mastodon will only show posts that were uploaded in the language that you selected to view. Posts shared in all other languages will stop appearing inside the public timelines on the platform, meaning you won’t see posts made in an unselected language inside the Local and Federated sections. 

This setting, however, isn’t applied to your Home timeline, which may continue to show posts in all languages since these posts are from people you follow on Mastodon. Since you may also follow certain hashtags, posts with a followed hashtag will appear inside the Home timeline even if they were uploaded in a blocked language. 

Mastodon notes that “language detection can be very imprecise”, even when you select languages to filter posts. Because of this, you may miss posts in your preferred languages, or some posts in blocked languages may continue to appear on your timelines. 

How to unblock a preferred language on Mastodon

Inside the Preferences screen, scroll down to the Filter languages section. From here, check the box adjacent to the language you want to unblock. If you have more than one language you want to unblock, check those boxes as well. 

Mastodon will now update your preferred set of languages and show posts in the newly unblocked languages on your public timelines. 

How to turn off the language block on Mastodon

When you select your preferred languages, Mastodon will stop any post that’s shared with your undesired language from appearing on your public timelines. If you’ve been missing out on important posts because of this block, you can turn off the language block entirely so that you see all unfiltered posts on your Local and Federated feeds at all times. 

Inside the Preferences screen, scroll down to the Filter languages section. From this section, uncheck all the boxes next to the languages you previously selected to view.

To turn off this filter completely, all the boxes adjacent to the languages list should be kept unchecked. 

That’s all you need to know about blocking posts based on languages on Mastodon. 


Mastodon: How To Search For Posts And People

There are various things that you can do on Mastodon. It can seem a bit overwhelming, but the key is to learn what’s important to you first. Once you learn what you need, you can pick up everything else little by little. A basic feature you can start with is learning to search for content and other users. You might be inclined to use Twitter methods on Mastodon, but is that the way to go?

How to Use the Search Feature on Mastodon

Let’s say that you’re looking for another user. The good news is that you will no longer need the person’s domain name. If you have it, great, but it won’t be mandatory. You shouldn’t have a problem finding the person if you only have the username or display name.

When it comes to searching for posts, you’ll need to use hashtags to find those. The idea behind using hashtags is to let users be in control when it comes to others finding their posts.

If you want to find another Mastodon user, you’ll need to type their handle in the form @username@server.domain. This is what it would look like if you added the domain, but the user will still show up in the search results even if you don’t add it. You can also find a user using their display name. The display name would be like their real name. Remember that if you’re going to search for someone using their display name, there is no need to put the @ at the beginning. You can always try the other option if one search doesn’t work.
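Splitting a handle into its username and (optional) server parts is straightforward. A small Python sketch, using a hypothetical handle, shows the format and why the domain can be omitted:

```python
def parse_handle(handle):
    """Split a Mastodon handle like '@user@instance.tld' into
    (username, domain); the domain part is optional, matching how
    search works on the platform."""
    parts = handle.lstrip("@").split("@")
    username = parts[0]
    domain = parts[1] if len(parts) > 1 else None
    return username, domain

parse_handle("@alice@mastodon.social")  # ('alice', 'mastodon.social')
parse_handle("alice")                   # ('alice', None)
```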

How to Search for Content on Mastodon Using Hashtags

When you find a post you like, there are various things you can do with it. Below the post, you’ll see multiple options to choose from. You can reply (the arrow pointing left), Boost (the arrows that form a box), Star (mark it as a favorite), Bookmark, and share, and then there are the options behind the three dots.

You can use other options such as:

Expand this post

Open original page

Copy link to post

Direct Message

Filter this post

Block domain
If you happen to have the URL of a post, you can enter it in the search bar and find the post that way. To your left, you’re going to see three useful options if you want to explore new posts: Explore, Local, and Federated. You’ll find many public posts here that will keep you busy for a while.

If you want to look at the most recent posts from other users on your server, Local is the place to go. The Federated option does the same thing but with a plus: it also shows you posts from other servers your server knows about. And if the posts you want to see are the ones gaining popularity on your server and others too, Explore is your section.

Further Reading

There are other things you can do on Mastodon. For example, if you need help deciding on something, you can always create a poll to help you decide. If you see a post you want to keep, you can always pin the post, so it’s always easy to find.


Create iOS And Android Apps Based On Blockchain Technology

Focus on the Interface

Blockchain development can power all kinds of valuable iOS and Android apps, but for these apps to achieve their full potential, developers need to stay focused on the interface. Not considering the user’s experience throughout the development process can reduce the app’s impact.

Is your front-end programming language suitable? Is the blockchain software needed to run that language included? Is proper application management built into the development process?

These are questions that have to be answered before the developer can produce an excellent user-interface design. Identifying the right analytics is also crucial, and it is becoming harder for developers to pick the ideal system. Assembling an admin console is an integral step of this process too.

Take a Good Look at the Value of Architecture

If a blockchain app is meant to run on an iPhone or Android device, developers must take a close look at the planned architecture so they can avoid common mistakes. Unfortunately, many aspects are not always considered, and they frequently involve additional procedures that add extra time to the process.

For instance, anyone considering developing a hybrid app will need to obtain certain permissions first. The better developers understand the established parameters, the better their odds of producing an app that is helpful to people who rely on blockchain tech every day.

The processors, size, operating systems, and memory have to be configured correctly, and these configurations all fall under the architectural umbrella.

Proper Platform Choice

Whether the developer decides to select Ethereum, Quorum, or another platform entirely, they should take the time to examine the benefits and pitfalls of each, particularly how each handles blockchain technology.

Recognizing the Importance of Consensus Mechanisms

Is your app decentralized? This is one of the chief questions that has to be answered before the process can be completed. An iOS or Android app that relies on blockchain needs a consensus mechanism to operate correctly once it launches.

Without decentralization and a consensus mechanism, many of the usual issues and problems that arise aren’t as simple to address. The system used to link nodes and provide connectivity requires a suitable consensus. Without a consensus mechanism, the system’s ability to perform all of its vital tasks is badly compromised.
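To make the idea concrete, here is a deliberately simplified Python sketch of a majority-vote consensus check. This is a toy illustration only; real consensus protocols (proof of work, proof of stake, Raft-style voting) are far more involved:

```python
def reach_consensus(votes, quorum=2/3):
    """Toy consensus check: the network accepts a proposed block when
    the fraction of nodes approving it meets the quorum threshold.

    votes: mapping of node id -> True (approve) / False (reject)."""
    approvals = sum(1 for approved in votes.values() if approved)
    return approvals / len(votes) >= quorum

# Two of three hypothetical nodes approve: quorum of 2/3 is met.
reach_consensus({"node1": True, "node2": True, "node3": False})
```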

Identification of Aims

Last but definitely not least, developers must identify their aims. What goal are they trying to achieve by creating this app? What job will blockchain technology perform? Are all the essential blockchain development fundamentals being adhered to? Is the problem that will be solved made absolutely clear?

Answering these questions in a timely fashion permits a developer to choose the right strategy. The same principles that apply throughout the development of any other app still come into play when blockchain apps are being designed for iOS and Android users. The problem that’s being solved by the app has to be clearly defined.

Are the problems caused by data loss? What tools are being supplied that weren’t previously available? To get the most from the development process, the problems and the related goals have to be analyzed carefully.

6 Youtube Seo Tips Based On Google’s Published Paper

YouTube’s recommendation engine is one of the most successful innovations Google has ever built. A staggering 70 percent of watch time on YouTube is driven by YouTube’s own recommendations.

Despite this, the SEO industry tends to focus on sayings like “YouTube is the world’s second largest search engine,” and emphasize ranking in YouTube search results or getting YouTube listings in Google search results.

Especially surprising is the fact that YouTube has actually published a paper (The YouTube Video Recommendation System) describing how its recommendation engine works.

Yet this paper is rarely referenced by the SEO industry.

This article will tell you what’s in that paper and how it should impact the way you approach SEO for YouTube.

1. Metadata

To this day, metadata remains far more important for SEO on YouTube than it is for search results in Google.

While YouTube is now able to create automated closed captions for videos and its capacity to extract information from video has improved dramatically over the years, you should not rely upon these if you want YouTube to recommend your video.

YouTube’s paper on the recommendation algorithm mentions that metadata is an important source of information, although the fact that metadata is often incomplete or even entirely missing is an obstacle that their recommendation engine is designed to overcome as well.

To avoid forcing the recommendation engine to do too much work, make sure that every metadata field is populated with the right information with every video you upload:


Include your target keyword in the video title, but make sure the title also grabs attention and incites curiosity from users.

Attention-grabbing titles are arguably even more important on YouTube than traditional search, since the platform relies more heavily on recommendations than search results.


Include a full description that uses your keyword or some variation on it, and make sure it is at least 250 words long.

The more useful information you include here, the more data YouTube has to work with, allowing you to capitalize on the long tail.

Include the major points you will cover in the video and the primary questions that you will address.

Additionally, using descriptions that relate to other videos, as long as it is appropriate from the user perspective, may help you turn up in the recommendations for those videos.


Keyword tags still matter on YouTube, unlike the meta keyword tag for search engines, which is completely defunct.

Include your primary keyword and any variations, related topics that come up in the video, and other YouTubers you mention within the video.


Include your video in playlists that feature related content, and recommend your playlists at the end of your videos.

If your playlists do well, then your video can become associated with keeping users on YouTube longer, leading to your video showing up in recommendations.


Use an eye-catching thumbnail. Good thumbnails typically include some text to indicate the subject matter and an eye-catching image that creates an immediate emotional reaction.

Closed Captions

While YouTube’s automated closed captions are reasonably accurate, they still often feature misinterpretations of your words. Whenever possible, provide a full transcript within your metadata.


Use your keyword in your filename. This likely doesn’t have as much impact as it once did, but it certainly doesn’t hurt anything.

2. Video Data

The data within the video itself is becoming more important every day.

The YouTube recommendation engine paper explicitly references the raw video stream as an important source of information.

Because YouTube is already analyzing the audio and generating automated transcripts, it’s important that you say your keyword within the video itself.

Reference the name and YouTube channel of any videos you are responding to within the video as well in order to increase the chances that you will show up in their video recommendations.

Eventually, it may become more important to rely less on the “talking head” video style. Google has a Cloud Video Intelligence API capable of identifying objects within the video.

Including videos or images within your videos referencing your keywords and related topics will likely help improve your video’s relevancy scores in the future, assuming these technologies aren’t already in motion.

Keep your videos structured well and not too “rambly” so that any algorithms at play will be more likely to analyze the semantic content and context of your video.

3. User Data

Needless to say, we don’t have direct control over user data, but we can’t understand how the recommendation engine works or how to optimize for it without understanding the role of user data.

The YouTube recommendation engine paper divides user data into two categories:

Explicit: This includes liking videos and subscribing to video channels.

Implicit: This includes watch time, which the paper acknowledges doesn’t necessarily imply that the user was satisfied with the video.

To optimize user data, it’s important to encourage explicit interactions such as liking and subscribing, but it’s also important to create videos that lead to good implicit user data.

Audience retention, especially relative audience retention, is something you should follow closely.

Videos that have poor relative audience retention should be analyzed to determine why, and videos with especially poor retention should be removed so that they don’t hurt your overall channel.

4. Understanding Co-Visitation

Here is where we start getting into the meat of YouTube’s recommendation engine.

The YouTube paper explains that a fundamental building block of the recommendation engine is its ability to map one video to a set of similar videos.

Importantly, similar videos are here defined as videos that the user is more likely to watch (and presumably enjoy) after seeing the initial video, rather than necessarily having anything to do with the content of the videos being all that similar.

This mapping is accomplished using a technique called co-visitation.

The co-visitation count is simply the number of times any two videos were both watched within a given time period, for example, 24 hours.

To determine how related two videos are, the co-visitation count is then divided by a normalization function, such as the popularity of the candidate video.

In other words, if two videos have a high co-visitation count, but the candidate video is relatively unpopular, the relatedness score for the candidate video is considered high.

In practice, the relatedness score needs to be adjusted by factoring in how the recommendation engine itself biases co-visitation, watch time, video metadata, and so on.
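The core computation can be sketched in a few lines of Python. This is a toy illustration with made-up session data, using the candidate’s global view count as a stand-in for the paper’s normalization function, which is more involved in practice:

```python
from collections import Counter
from itertools import combinations

def relatedness(sessions, seed, candidate):
    """Toy relatedness score: co-visitation count divided by the
    candidate's global view count (a simple normalizer)."""
    covis = Counter()   # times two videos were watched in the same window
    views = Counter()   # global view counts
    for watched in sessions:  # one session = videos watched within, say, 24 hours
        for video in set(watched):
            views[video] += 1
        for a, b in combinations(sorted(set(watched)), 2):
            covis[(a, b)] += 1
            covis[(b, a)] += 1
    return covis[(seed, candidate)] / views[candidate]

# Hypothetical sessions: B is co-watched with A often but is popular,
# C is co-watched with B once but is less popular.
sessions = [{"A", "B"}, {"A", "B"}, {"B", "C"}, {"C"}]
```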

Practically speaking, what this means for us is that if you want your video to pick up traffic from recommendations, you need people who watched another video to also watch your video within a short period of time.

There are a number of ways to accomplish this:

Creating response videos within a short time after an initial video is created.

Publishing videos on platforms that also sent traffic to another popular video.

Targeting keywords related to a specific video (as opposed to a broader subject matter).

Creating videos that target a specific YouTuber.

Encouraging your viewers to watch your other videos.

5. Factoring In User Personalization

YouTube’s recommendation engine doesn’t simply suggest videos with a high relatedness score. The recommendations are personalized for each user, and how this is done is discussed explicitly within the paper.

To begin, a seed set of videos is selected, including videos that the user has watched, weighted by factors such as watch time and whether they thumbed-up the video, etc.

For the simplest recommendation engine, the videos with the highest relatedness score would then simply be selected and included in the recommendations.

However, YouTube discovered that these recommendations were simply too narrow. The recommendations were so similar that the user would likely have found them anyway.

Instead, YouTube expanded the recommendations to include videos which had a high relatedness score for those would-be initial recommendations, and so on within a small number of iterations.

In other words, to show up as a suggested video, you don’t necessarily need to have a high co-visitation count with the video in question. You could make do by having a high co-visitation count with a video that in turn has a high co-visitation count with the video in question.
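This iterative widening of the candidate set can be sketched as follows; the `related` map and video names are hypothetical, standing in for the per-video relatedness rankings the real system would compute:

```python
def expand_recommendations(related, watched, iterations=2, per_video=3):
    """Widen a user's candidate set by following 'related videos' links
    transitively for a small number of iterations.

    related: video -> list of related videos, best first.
    watched: the user's seed set (videos already watched)."""
    candidates = set()
    frontier = set(watched)
    for _ in range(iterations):
        step = set()
        for video in frontier:
            step.update(related.get(video, [])[:per_video])
        step -= set(watched)   # never re-recommend the seed videos
        candidates |= step
        frontier = step        # next round expands the new candidates
    return candidates

related = {"A": ["B"], "B": ["C"], "C": ["A"]}
expand_recommendations(related, ["A"])  # reaches C via B
```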

For this to ultimately work, however, your video will also need to rank high in the recommendations, as discussed in the next section.

6. Rankings: Video Quality, User Specificity & Diversification

YouTube’s recommendation engine doesn’t simply rank videos by which videos have the highest relatedness score. Being within the top N relatedness scores is simply pass/fail. The rankings are determined using other factors.

The YouTube paper describes these factors as video quality, user specificity, and diversification.

Video Quality

Quality signals include:

User ratings.

Upload time.

View count.
The paper doesn’t mention it, but session time has since become the driving factor here, in which videos that lead to the user spending more time on YouTube (not necessarily on that YouTube video or channel) rank better.

User Specificity

These signals boost videos that are a good match based on the user’s history. This is essentially a relatedness score based on the user’s history, rather than just the seed video in question.


Diversification

Videos that are too similar are removed from the rankings so that users are presented with a more meaningful selection of options.

This is accomplished by limiting the number of recommendations using any particular seed video to select candidates, or by limiting the number of recommendations from a specific channel.


The YouTube recommendation engine is central to how users engage with the platform.

Understanding how YouTube works will dramatically improve your chances of doing well on the world’s most popular video site.

Take in what we’ve discussed here, consider giving the paper itself a look, and incorporate this knowledge into your marketing strategy.


3 Ways To Change App Language On Any Android Phone

The latest Android 13 refined the user experience by topping it with useful features. One among them is the ability to change a specific app’s language without switching the entire system’s language. This feature goes by the name of ‘App Language‘ and is housed inside the settings app. Here’s how you can access and make the most out of it:

Samsung has incorporated the app language feature into its One UI 5, which is based on Android 13. Here’s how you can use it on your Samsung Galaxy phone.

1. Open the Settings app and scroll down to open General Management.

2. Next, tap on the App Languages option and then select the desired apps to change their language.

In the case of the Google Pixel Phone running on Android 13, here’s how you can change the app language for specific apps as per your liking.

1. Open the Settings app on your Pixel phone and scroll down to open the System settings.

2. Tap on the Languages & input option.

3. On the next page, tap on the App Languages option and pick your desired app to change its language.

4. Finally, pick your preferred language to use inside the selected app without affecting other installed applications and the system language.

In the case of other phones running Android 13, you can follow the steps mentioned below to set a custom language for apps. Here we are using the IQOO Neo 6 running Android 13.

1. Open the Settings app on your phone.

2. Here, go to the Languages and Input option on the next page.

3. Further, tap on App languages and pick your desired app to change its language.

4. Finally, pick your preferred language to use inside the app.

If your smartphone hasn’t received the latest Android 13 update yet, you could change the language of some apps by accessing their web versions on the browser. Here’s how:

1. Access the app’s web version whose language you wish to change in your web browser (take YouTube, Reddit, or Gmail, for instance).

2. Tap the profile icon in the top-right corner to access the account settings.

3. Finally, find the language setting and pick your preferred language from the list of available options. Once selected, the interface shall update itself instantly in the new language.

Besides other third-party apps, you can change the language of all Google apps at once on your smartphone by configuring your Google Account’s language setting online: open your account’s language preferences in a browser, then pick your desired country and language to update all Google apps on your devices with the selected language.

Are you tired of listening to the robotic sound of your Google Assistant? You can change it and make it even more interesting by following our quick guide on changing Google Assistant Voice and Language.



Deepfakes May Use New Technology, But They’re Based On An Old Idea

“Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace.” 

Those were the words that Richard Nixon read on television in 1969 while breaking the terrible news to the nation that the Apollo 11 mission had failed and astronauts Neil Armstrong and Buzz Aldrin had perished while attempting the first lunar landing.

But only in an alternate reality. Nixon never had to utter those lines because Apollo 11 was a historic success, and Armstrong, Aldrin, and their pilot Michael Collins made it safely back to Earth. But a speech was prepared for then-President Nixon in case they did not. The short film In Event of Moon Disaster shows us how that scenario would have unfolded with an incredibly convincing deepfake of Nixon delivering the disastrous news. 

[Related: 7 easy ways you can tell for yourself that the moon landing really happened]

A deepfake is a combination of “deep,” meaning deep learning, and “fake,” as in fabricated. Together it’s a label for an audio or video clip that uses artificial intelligence to portray a scenario that never really happened. Usually, that consists of a person saying or doing something they never did, often without the consent of those portrayed, says Halsey Burgund, one of the directors of In Event of Moon Disaster.

While deepfakes are a recent development, they build upon a long and established line of distorted media that still exists as low-tech, impactful misinformation today. Although deepfake technology is evolving quickly, there are efforts to slow its dissemination. And while there are many malicious uses of deepfakes, there are some beneficial applications in areas like human rights and accessibility. An ongoing exhibit at the Museum of the Moving Image in New York City, Deepfake: Unstable Evidence on Screen, explores these themes with In Event of Moon Disaster as its centerpiece.

In Event of Moon Disaster is a deepfake of Richard Nixon telling the nation that Apollo 11 failed.

The difference between deepfakes and other misinformation

To make a deepfake of a person, creators have to train a computer by giving it lots of video, audio, or images of the “target,” the person whose image and voice you are trying to manipulate, and the “source,” the actor who is modeling the words or action you want the target to appear to say or do. To ace this, the computer uses a form of artificial neural networks, which are meant to function like a human brain trying to solve a problem by looking at evidence, finding patterns, and then applying that pattern to new information. Neural networks were first conceptualized in 1943, and can be used to do everything from writing a recipe to translating convoluted journal articles. Deep learning and deepfake creation involve many layers of neural networks, so much so that the computer can train and correct itself.
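The “evidence in, pattern out” behavior of a single artificial neuron, the building block those layered networks stack, can be sketched in a few lines of Python. This is a toy illustration only, not the architecture any deepfake system actually uses:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its input evidence passed
    through a squashing (sigmoid) activation, yielding a value in (0, 1)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    """Stacking neurons side by side gives a layer; stacking layers
    gives the 'deep' in deep learning."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# With zero weights a neuron is maximally uncertain (output 0.5);
# training adjusts the weights so the pattern it detects sharpens.
neuron([1.0, 2.0], [0.0, 0.0], 0.0)
```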

While deepfake technology might seem harmful in itself, it’s aided by how quickly social media users spread the information, often without pausing to question its source.

“Deepfakes as a production technology presents a lot of concern,” Barbara Miller, co-curator of the exhibit and deputy director for curatorial affairs at the Museum of the Moving Image, says. “I think it’s impossible to think about that concern without looking at the lightning speed that information circulates.”

But the effective spread of misinformation predates deepfakes and even social media. The exhibit showcases deepfakes in the context of the long history of “unstable nonfiction media,” Miller adds, so visitors aren’t left with the assumption that the rise of AI-driven manipulation is the source of all distrust in media. 

“These are techniques that have always existed for as long as the media itself has existed,” Burgund says. 

Using basic video editing skills, almost anyone can slice and dice footage to change the meaning or tone.

In the 1890s, the Edison Manufacturing Company was eager to flex the capabilities of motion pictures by capturing the Spanish-American War on camera. However, cameras in the 19th century were a whole lot clunkier than those today, making it difficult to film combat close up. So, the company scattered staged footage of American soldiers swiftly defeating enemy regiments among the real footage of marching soldiers and weaponry. The cuts stoked patriotism among American viewers, who weren’t told the difference between the real and fake scenes. 

Even today, you do not need AI to create effective and impactful disinformation. “The tried and true methods of manipulation that have been used forever are still effective,” Burgund says. Even putting the wrong caption on a photo, without even editing the image, can create misinformation, he explains.

Take the 2020 presidential election, for example. In the months leading up to it, Miller says there was worry that deepfakes could throw a wrench in the democratic process. However, the technology didn’t really make a big splash during the election, at least when compared to cruder forms of manipulation that were able to spread misinformation successfully.  

Using basic video editing skills, almost anyone can slice and dice footage to change the meaning or tone. These are called “cheapfakes” or “shallowfakes” (the spliced Spanish-American war videos were one of the earliest instances). The intro to In Event of Moon Disaster uses these techniques on archival footage to make it seem like Apollo 11 crashed. The directors interspersed footage of the lunar lander returning between quick cuts of the astronauts and set it to the soundtrack of accelerating beeping and static noises to create the anxiety-inducing illusion that the mission went awry. Because these techniques require minimal expertise and little more than a laptop, they are much more pervasive than deepfakes.

In fact, some of the most well-known videos that have been debated to be deepfakes are actually cheapfakes. In 2020, Rudolph Giuliani, then-President Donald Trump’s lawyer, tweeted a video of Nancy Pelosi in which she appeared to slur her words, leading some of her critics to assert that she was drunk. The video was found to have been edited and slowed down but did not use any deepfake technology.

Burgund and his co-director, Francesca Panetta, think that confirmation bias is really what aids the dissemination of deepfakes or cheapfakes, even when they’re clearly poorly made. “If the deepfake is portraying something that you want to believe, then it hardly has to look real at all,” Burgund says.

Slowing the spread of deepfakes

While it currently requires some technical know-how to create a deepfake like Burgund and Panetta’s, Matthew Wright, the director of research for Rochester Institute of Technology’s Global Cybersecurity Institute and a professor of computing security, says the technology is quickly spreading to the masses, and there are already many deepfake apps and software.

“This is democratizing a potentially dangerous technology,” Wright says. 

There are efforts to slow or counteract the spread of deepfakes, however. While the usual impulse among tech researchers is to share methods and tools with the public, Wright says some of the experts developing new deepfake technologies have vowed to keep their results more private. Additionally, there are projects such as the Content Authenticity Initiative, which is a consortium of companies and organizations like Adobe, Twitter, and the New York Times that aims to track the origins of media by watermarking them even if they are edited. This is not a perfect solution, Wright says, because there are ways to bypass those checks. Still, if every video coming out of the White House, say, was digitally watermarked, then it could slow or prevent their manipulation. 
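As a rough illustration of the provenance idea behind such initiatives — a simplified sketch, not the Content Authenticity Initiative’s actual specification — a publisher could record a hash of the original media and sign it, so that any later edit breaks verification. The key and manifest format below are hypothetical; real systems use public-key signatures embedded in the file’s metadata.

```python
import hashlib
import hmac

# Hypothetical signing key; real provenance systems use public-key
# signatures, not a shared secret like this.
SIGNING_KEY = b"publisher-secret"

def make_manifest(media: bytes) -> dict:
    """Hash the media and sign the hash, mimicking a provenance manifest."""
    digest = hashlib.sha256(media).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}

def verify(media: bytes, manifest: dict) -> bool:
    """Check that the media still matches the signed hash in the manifest."""
    digest = hashlib.sha256(media).hexdigest()
    expected = hmac.new(SIGNING_KEY, manifest["sha256"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"]) and digest == manifest["sha256"]

original = b"raw video bytes from the source camera"
manifest = make_manifest(original)
print(verify(original, manifest))               # untouched footage passes
print(verify(original + b"tampered", manifest)) # edited footage fails
```

As Wright notes, such checks can be bypassed — an attacker who strips the manifest, or who controls the signing key, defeats this scheme — which is why watermarking is a mitigation rather than a complete solution.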

Wright is working on creating deepfake detection tools that could be used by journalists and regular internet users. (Microsoft launched a similar product in 2020.) Wright says he and his colleagues are very careful about not sharing all of the source code, because someone with access to it could craft a deepfake that deceives these detectors. But if there’s a diversity of authenticators, there’s less of a chance of that happening.

“As long as multiple detection tools are actually being used against these videos, then I think overall our chances of catching [deepfakes] are pretty good,” Wright says. 

The 2020 documentary Welcome to Chechnya used deepfake technology to mask the faces of its vulnerable subjects.

The values of deepfake technology

You may have encountered the benefits of deepfakes in entertainment, like in the most recent Star Wars films, or in satire, like this Star Trek throwback with Jeff Bezos and Elon Musk’s faces subbed in. However, the technology also has utility in human rights and disability accessibility.

The Museum of the Moving Image exhibit features clips from Welcome to Chechnya, an award-winning documentary by David France that uses deepfake technology to conceal the true faces of LGBTQ activists facing persecution in the Russian republic. This allows the viewer to see the emotion of the subjects while still protecting their identities.

The technology has also been used to improve accessibility for those who have lost their voice due to an illness, injury, or disability, such as Lou Gehrig’s disease, Burgund says. VocaliD, for instance, uses AI to recreate the user’s voice from old recordings for text-to-speech technology, or help them pick a voice that best fits their personality from a bank of options. 

[Related: Deepfakes could help us relive history—or rewrite it]

While Panetta and Burgund want the viewers of their deepfake to interrogate the origins of the media they encounter, they don’t want the audience to be alarmed to the point of creating a zero-trust society.

“This is not about trying to scare people into not believing anything they see,” Panetta says, “because that is as problematic as the misinformation itself.”

Just like trust in media can be weaponized, distrust in media can be weaponized, too. 

As the exhibit points out, even the theoretical existence of deepfakes results in a “liar’s dividend,” where one can insinuate a real video is a deepfake to sow seeds of doubt.

In late 2018, Gabonese President Ali Bongo Ondimba gave a New Year’s address after suffering a stroke and being out of the public eye as a result. His political rivals said that he looked unnatural and pushed the idea that the video was a deepfake. While experts agreed the video seemed off, no one could say for sure whether it was a deepfake, with some attributing the peculiarity of Bongo’s appearance to his poor health. A week later, citing the oddness of the video, his opponents attempted a coup but were unsuccessful.

Wright says that he and his colleagues have started to see more of these cry-wolf situations in the political sphere than actual deepfakes circulating and causing damage. “There can be deepfakes, but they’re not that commonly used,” he says. “What you need to do is understand the source.”

For anyone who’s inundated with information while scrolling through social media and the news, it’s important to pause and question, “how did this information reach me? Who is disseminating this? And can I trust this source?” Doing that can determine whether a deepfake (or cheapfake) becomes potent misinformation or just another video on the internet.  

Deepfake: Unstable Evidence on Screen will be on display at the Museum of the Moving Image in Queens, New York through May 15, 2022.