
FACEBOOK

Facebook’s Oversight Board already ‘a bit frustrated’ — and it hasn’t made a call on Trump ban yet

The Facebook Oversight Board (FOB) is already feeling frustrated by the binary choices it’s expected to make as it reviews Facebook’s content moderation decisions, according to one of its members, who gave evidence today to a UK House of Lords committee running an inquiry into freedom of expression online.

The FOB is currently considering whether to overturn Facebook’s ban on former US president, Donald Trump. The tech giant banned Trump “indefinitely” earlier this year after his supporters stormed the US Capitol.

The chaotic insurrection on January 6 led to a number of deaths and widespread condemnation of how mainstream tech platforms had stood back and allowed Trump to use their tools as megaphones to whip up division and hate rather than enforcing their rules in his case.

Yet, after finally banning Trump, Facebook almost immediately referred the case to its self-appointed and self-styled Oversight Board for review — opening up the prospect that its Trump ban could be reversed in short order via an exceptional review process that Facebook has fashioned, funded and staffed.

Alan Rusbridger, a former editor of the British newspaper The Guardian — and one of 20 FOB members selected as an initial cohort (the Board’s full headcount will be double that) — avoided making a direct reference to the Trump case today, given the review is ongoing, but he implied that the binary choices it has at its disposal at this early stage aren’t as nuanced as he’d like.

“What happens if — without commenting on any high profile current cases — you didn’t want to ban somebody for life but you wanted to have a ‘sin bin’ so that if they misbehaved you could chuck them back off again?” he said, suggesting he’d like to be able to issue a soccer-style “yellow card” instead.

“I think the Board will want to expand in its scope. I think we’re already a bit frustrated by just saying take it down or leave it up,” he went on. “What happens if you want to… make something less viral? What happens if you want to put an interstitial?

“So I think all these things are things that the Board may ask Facebook for in time. But we have to get our feet under the table first — we can do what we want.”

“At some point we’re going to ask to see the algorithm, I feel sure — whatever that means,” Rusbridger also told the committee. “Whether we can understand it when we see it is a different matter.”

To many people, Facebook’s Trump ban is uncontroversial — given the risk of further violence posed by letting Trump continue to use its megaphone to foment insurrection. There are also clear and repeat breaches of Facebook’s community standards if you want to be a stickler for its rules.

Among supporters of the ban is Facebook’s former chief security officer, Alex Stamos, who has since been working on wider trust and safety issues for online platforms via the Stanford Internet Observatory.

Stamos was urging both Twitter and Facebook to cut Trump off before everything kicked off, writing in early January: “There are no legitimate equities left and labeling won’t do it.”

But in the wake of big tech moving almost as a unit to finally put Trump on mute, a number of world leaders and lawmakers were quick to express misgivings at the big tech power flex.

Germany’s chancellor called Twitter’s ban on him “problematic”, saying it raised troubling questions about the power of the platforms to interfere with speech. Other lawmakers in Europe seized on the unilateral action, saying it underlined the need for proper democratic regulation of tech giants.

The sight of the world’s most powerful social media platforms being able to mute a democratically elected president (even one as divisive and unpopular as Trump) made politicians of all stripes feel queasy.

Facebook’s entirely predictable response was, of course, to outsource this two-sided conundrum to the FOB. After all, that was its whole plan for the Board. The Board would be there to deal with the most headachey and controversial content moderation stuff.

And on that level Facebook’s Oversight Board is doing exactly the job Facebook intended for it.

But it’s interesting that this unofficial ‘supreme court’ is already feeling frustrated by the limited binary choices Facebook has given it. (In the Trump case, either reversing the ban entirely or continuing it indefinitely.)

The FOB’s unofficial message seems to be that the tools are simply far too blunt. Although Facebook has never said it will be bound by any wider policy suggestions the Board might make — only that it will abide by the specific individual review decisions. (Which is why a common critique of the Board is that it’s toothless where it matters.)

How aggressive the Board will be in pushing Facebook to be less frustrating very much remains to be seen.

“None of this is going to be solved quickly,” Rusbridger went on to tell the committee in more general remarks on the challenges of moderating speech in the digital era. Getting to grips with the Internet’s publishing revolution could in fact, he implied, take the work of generations — making the customary reference to the long tail of societal disruption that flowed from Gutenberg inventing the printing press.

If Facebook was hoping the FOB would kick hard (and thorny-in-its-side) questions around content moderation into long and intellectual grasses, it’s surely delighted with the level of beard stroking which Rusbridger’s evidence implies is now going on inside the Board. (If, possibly, slightly less enchanted by the prospect of its appointees asking to poke around its algorithmic black boxes.)

Kate Klonick, an assistant professor at St John’s University Law School, was also giving evidence to the committee — having written an article on the inner workings of the FOB, published recently in the New Yorker, after she was given wide-ranging access by Facebook to observe the process of the body being set up.

The Lords committee was keen to learn more on the workings of the FOB and pressed the witnesses several times on the question of the Board’s independence from Facebook.

Rusbridger batted away concerns on that front — saying “we don’t feel we work for Facebook at all”. Though Board members are paid by Facebook via a trust it set up to put the FOB at arm’s length from the corporate mothership. And the committee didn’t shy away from raising the payment point to query how genuinely independent Board members can be.

“I feel highly independent,” Rusbridger said. “I don’t think there’s any obligation at all to be nice to Facebook or to be horrible to Facebook.”

“One of the nice things about this Board is occasionally people will say but if we did that that will scupper Facebook’s economic model in such and such a country. To which we answer well that’s not our problem. Which is a very liberating thing,” he added.

Of course it’s hard to imagine a sitting member of the FOB being able to answer the independence question any other way — unless they were simultaneously resigning their commission (which, to be clear, Rusbridger wasn’t).

He confirmed that Board members can serve three terms of three years apiece — so he could have almost a decade of beard-stroking on Facebook’s behalf ahead of him.

Klonick, meanwhile, emphasized the scale of the challenge it had been for Facebook to try to build from scratch a quasi-independent oversight body and create distance between itself and its claimed watchdog.

“Building an institution to be a watchdog institution — it is incredibly hard to transition to institution-building and to break those bonds [between the Board and Facebook] and set up these new people with frankly this huge set of problems and a new technology and a new back end and a content management system and everything,” she said.

Rusbridger had said the Board went through an extensive training process which involved participation from Facebook representatives during the ‘onboarding’. But he went on to describe a moment when the training had finished and the FOB realized some Facebook reps were still joining their calls — saying that at that point the Board felt empowered to tell Facebook to leave.

“This was exactly the type of moment — having watched this — that I knew had to happen,” added Klonick. “There had to be some type of formal break — and it was told to me that this was a natural moment that they had done their training and this was going to be a moment of push back and breaking away from the nest. And this was it.”

However if your measure of independence is not having Facebook literally listening in on the Board’s calls you do have to query how much Kool Aid Facebook may have successfully doled out to its chosen and willing participants over the long and intricate process of programming its own watchdog — including to extra outsiders it allowed in to observe the set up.

The committee was also interested in the fact the FOB has so far mostly ordered Facebook to reinstate content its moderators had previously taken down.

In January, when the Board issued its first decisions, it overturned four out of five Facebook takedowns — including in relation to a number of hate speech cases. The move quickly attracted criticism over the direction of travel. After all, the wider critique of Facebook’s business is that it’s far too reluctant to remove toxic content (it only banned Holocaust denial last year, for example). And lo! Here’s its self-styled ‘Oversight Board’ taking decisions to reverse hate speech takedowns…

The unofficial and oppositional ‘Real Facebook Board’ — which is truly independent and heavily critical of Facebook — pounced and decried the decisions as “shocking”, saying the FOB had “bent over backwards to excuse hate”.

Klonick said the reality is that the FOB is not Facebook’s supreme court — but rather it’s essentially just “a dispute resolution mechanism for users”.

If that assessment is true — and it sounds spot on, so long as you recall the fantastically tiny number of users who get to use it — the amount of PR Facebook has been able to generate off of something that should really just be a standard feature of its platform is truly incredible.

Klonick argued that the Board’s early reversals were the result of it hearing from users objecting to content takedowns — which had made it “sympathetic” to their complaints.

“Absolute frustration at not knowing specifically what rule was broken or how to avoid breaking the rule again or what they did to be able to get there or to be able to tell their side of the story,” she said, listing the kinds of things Board members had told her they were hearing from users who had petitioned for a review of a takedown decision against them.

“I think that what you’re seeing in the Board’s decision is, first and foremost, to try to build some of that back in,” she suggested. “That is the signal that they’re sending back to Facebook — and it’s pretty low hanging fruit, to be honest. Which is: let people know the exact rule, give them a fact-to-fact type of analysis or application of the rule to the facts and give them that kind of read in to what they’re seeing, and people will be happier with what’s going on.

“Or at least just feel a little bit more like there is a process and it’s not just this black box that’s censoring them.”

In his response to the committee’s query, Rusbridger discussed how he approaches review decision-making.

“In most judgements I begin by thinking well why would we restrict freedom of speech in this particular case — and that does get you into interesting questions,” he said, having earlier summed up his school of thought on speech as akin to the ‘fight bad speech with more speech’ Justice Brandeis type view.

“The right not to be offended has been engaged by one of the cases — as opposed to the borderline between being offended and being harmed,” he went on. “That issue has been argued about by political philosophers for a long time and it certainly will never be settled absolutely.

“But if you went along with establishing a right not to be offended that would have huge implications for the ability to discuss almost anything in the end. And yet there have been one or two cases where essentially Facebook, in taking something down, has invoked something like that.”

“Harm as opposed to offence is clearly something you would treat differently,” he added. “And we’re in the fortunate position of being able to hire in experts and seek advisors on the harm here.”

While Rusbridger didn’t sound troubled about the challenges and pitfalls facing the Board when it may have to set the “borderline” between offensive speech and harmful speech itself — being able to (further) outsource expertise presumably helps — he did raise a number of other operational concerns during the session, including over the lack of technical expertise among current Board members (who were purely Facebook’s picks).

Without technical expertise, how can the Board ‘examine the algorithm’, as he suggested it would want to? It won’t be able to understand Facebook’s content distribution machine in any meaningful way.

The Board’s current lack of technical expertise raises wider questions about its function — and whether its first learned cohort might be played as useful idiots from Facebook’s self-interested perspective — helping it gloss over and deflect deeper scrutiny of its algorithmic, money-minting choices.

If you don’t really understand how the Facebook machine functions, technically and economically, how can you conduct any kind of meaningful oversight at all? (Rusbridger evidently gets that — but is also content to wait and see how the process plays out. No doubt the intellectual exercise and insider view is fascinating. “So far I’m finding it highly absorbing,” as he admitted in his evidence opener.)

“People say to me you’re on that Board but it’s well known that the algorithms reward emotional content that polarises communities because that makes it more addictive. Well I don’t know if that’s true or not — and I think as a board we’re going to have to get to grips with that,” he went on to say. “Even if that takes many sessions with coders speaking very slowly so that we can understand what they’re saying.”

“I do think our responsibility will be to understand what these machines are — the machines that are going in rather than the machines that are moderating,” he added. “What their metrics are.”

Both witnesses raised another concern: That the kind of complex, nuanced moderation decisions the Board is making won’t be able to scale — suggesting they’re too specific to be able to generally inform AI-based moderation. Nor will they necessarily be able to be acted on by the staffed moderation system that Facebook currently operates (which gives its thousands of human moderators a fantastically tiny amount of thinking time per content decision).

Despite that, the issue of Facebook’s vast scale vs the Board’s limited and Facebook-defined function — to fiddle at the margins of its content empire — was one overarching point that hung uneasily over the session, without being properly grappled with.

“I think your question about ‘is this easily communicated’ is a really good one that we’re wrestling with a bit,” Rusbridger said, conceding that he’d had to brain up on a whole bunch of unfamiliar “human rights protocols and norms from around the world” to feel qualified to rise to the demands of the review job.

Scaling that level of training to the tens of thousands of moderators Facebook currently employs to carry out content moderation would of course be eye-wateringly expensive. Nor is it on offer from Facebook. Instead it’s hand-picked a crack team of 40 very expensive and learned experts to tackle an infinitesimally smaller number of content decisions.

“I think it’s important that the decisions we come to are understandable by human moderators,” Rusbridger added. “Ideally they’re understandable by machines as well — and there is a tension there because sometimes you look at the facts of a case and you decide it in a particular way with reference to those three standards [Facebook’s community standards, Facebook’s values and “a human rights filter”]. But in the knowledge that that’s going to be quite a tall order for a machine to understand the nuance between that case and another case.

“But, you know, these are early days.”

TechCrunch


[OPINION] The promise of technology is the promise of people


I would like for you to imagine the promise of technology. Facebook promises to be the gateway to your friends and family; ridesharing and delivery apps promise efficiency and connection against the grueling commute; your internet service provider promises cutting-edge reliability and speed. Sometimes, they even give you the promise of the world. When we strip away the allure of technology, what are we left with? A world of disconnect fueled by antagonism and shock that is filtered by content moderators, a non-solution to a systemic transportation crisis that leaves us with stories of exploited drivers, and aggravated calls about your internet plan. You haven’t quite been given the world — you can’t even connect to your meeting.

I would like for you to imagine who is behind technology. These promises, delivered or not, are given to us by tech CEOs and eagerly embraced across the world. We hunger for solutions to age-old problems in communication, transportation, news, education, energy, and love — and are eager to receive engineered answers. In turn, those wielding technology offer endless streams of support for new entrepreneurs, startups, and products to move us towards wealth and prosperity, each one supposedly more innovative than the last.

Our lives continuously cede to these platforms: our memories live in Facebook albums or the cloud, the rise and fall of political movements can be witnessed online — sometimes excusing us from on-the-grounds participation, developments in artificial intelligence offer us quicker answers, and we favor the simplicity offered a tap away. A hyper-efficient world aided by machines seems to solve society’s ills, until it becomes a sickness in itself.

The invisible laborers behind technology

In truth, our technological futures are built atop obscured human labor. A phenomenon termed “ghost work” by anthropologist Mary L. Gray refers to “work performed by a human which a customer believes is being performed by an automated process.”

Take ChatGPT, a general-purpose chatbot released in November 2022 that provides text responses near-instantaneously. It can help you with anything: writing emails, synthesizing data, or even programming itself. 

No machine thinks for itself. Models like ChatGPT are only able to impress us because they build on the breadth of human work, and thus carry the constraints and failures that accompany it. This invites questioning of that “breadth” in the first place: who designs these models (and with what intent), the data these models are trained on, and how this data is classified — steps that all involve humans.

Though the tool is widely lauded, universities are rushing to find solutions to potential cheating aided by ChatGPT. College-educated workers, even programmers themselves, begin to worry about employment as their labor seems increasingly replaceable by machines, even if it’s just new labor under the hood that we’re bending towards.

ChatGPT’s success can largely be attributed to its palatability. While chatbots are not new, the lack of obscenity and profanity in one is. Human input is present at every step of design. The best and worst of humanity is fed into language models (hence the previous issues with obscenity and extremism). Human-aided supervision and reinforcement learning guide these models’ outputs. To ensure ChatGPT was unlike its predecessors, OpenAI recruited an outsourcing firm in Kenya to help design a safer model. The process? To have these outsourced workers manually label examples of profanity, violence, and hate speech to be filtered out, in exchange for pay of about $2 (P108) an hour.

This is not an isolated case. The Global South has long endured these roles, becoming the invisible army that powers every impressive technology.

Take Facebook for instance, ubiquitous enough that there are countries that understand it as the internet itself. A study conducted by Helani Galpaya showed that more respondents across several countries (including the Philippines) self-reported being “Facebook users” than “internet users.” Meanwhile, Filipino content moderators under intensely-surveilled working conditions screen reports, exposing themselves to graphic sexual content, violence, and extremism on a daily basis. It is incredibly dehumanizing, mentally taxing work that many of us cannot fathom because we’ve never seen it. It is in our best interest to only see the light. It appears that those who gate the internet are often the most gated from the internet themselves.

Who gets to be called a technologist?

Millions of Filipinos enter Business Process Outsourcing (BPO), data-labeling, or content moderation jobs to support the technological infrastructure and rapid pace of “innovation.” Enticed with decent pay, often posted with little to no qualifications necessary, and done in recruitment hub hiring sprees, it’s hard to deny the opportunity to join the workforce and indulge in the industry’s economic promise. Silicon Valley startups (or even the Filipino “Sinigang Valley”) use the excuse of economic opportunity to justify remote outsourcing.

Even those not literally invisible are devalued with this mindset. Exploited laborers act as the on-demand service providers beneath the shiny interfaces on our phones: our food delivery drivers, the content moderators that clean our TikTok feeds, and support staff. Technology is something that can be summoned and controlled; people cannot be — or shouldn’t be.

After all, for technology to be consumable, it has to be palatable. Palatability involves shrouding the violent, intensive human labor needed to maintain technologies. This is why we are moved when we see the Facebook post of a delivery driver left to bear the brunt of canceled orders, wading through weather. Or with “older” technologies: how we turn a blind eye to ruthless production factories that power the fast fashion industry. It reminds us, for a brief moment, of the humanity in everything around us. Instead, companies continue to express technology as the stuff of magic. Perfectly cheap, efficient, and convenient. Then we are moved to hit checkout.

Even Silicon Valley’s classically educated laborers are no longer safe themselves. Microsoft has begun talks to invest $10 billion into OpenAI, while at the same time announcing layoffs for 10,000 workers. It is joined by Google and Amazon among others, all companies previously touted to push the boundaries of innovation. As we head towards a global economic downturn, it appears that the at-will treatment previously reserved for the Global South now spares no one.

Tech workers — whether working as ride-share drivers, content moderators, or BS Computer Science-educated software engineers — must come together in solidarity with consumers against an industry that has historically erased its people.

We need to call into question who the “technologists” that drive innovation are, especially when this innovation is at the expense of people. We need to recognize the breadth of forms that a technologist takes, and the truth that the massive forces of labor that write code, serve content, and protect us are continuously exploited. We need to know that maintaining a myopic view of the role of a “technologist” glorifies “technology” alone, detaching it from the human workforce that powers it. Without these laborers, these technologies would effectively be nothing. 

At the end of the day, technology is nothing but a tool. Technology is shaped by people, for people.

I’m not discounting technology’s potential for economic empowerment; I disparage how technology has been used as an exploitative force rather than a transformative one. It is time to reclaim technology and look towards its potential for hope — where this act of reclamation begins with power placed on all tech workers rather than the few.

I want a world where technology is used to put us in dialogue with one another, breaking down barriers instead of enacting more walls that hide us from one another. I want a world where machines don’t replace artists, but instead help more people make more art. I believe in a world where technology is a tool rather than the solution, where we have agency to use it as we please. I believe in a world where we think of people, first and foremost, not over-optimization and hyper-efficiency. I believe in a world where technology is a communal medium in which we can imagine better futures, where everyone is a technologist and engineer, not a tool wielded by the few. 

As technology is a tool, it is time for us to take it back. The truly magical part about technology is that it might be the most human thing about us. It is shaped by people, for people. – Rappler.com

Chia Amisola is a Product Designer based in San Francisco, California, who graduated with a BA in Computing and the Arts from Yale University in 2022. They are the founder of Developh and the Philippine Internet Archive.


How a meme gave Ke Huy Quan his most significant role


(Credits: Far Out / Press / A24)


Oscar nominee Ke Huy Quan’s acting career has come in two parts, several decades distanced from one another. Having played Short Round in 1984’s Indiana Jones and the Temple of Doom and also performed in The Goonies, Encino Man and Head of the Class, Quan took the decision to quit acting in 1992 as he struggled to make the significant progress he was hoping for.

Fast forward to 2021, and Quan secured a role in one of the most celebrated films of last year, Everything Everywhere All at Once, for which he won a Golden Globe and was this week nominated for the Academy Award for Best Supporting Actor.

Asked how the two Daniels (Daniel Kwan and Daniel Scheinert) came to cast Quan in Everything Everywhere All at Once during a Hollywood Reporter Actor’s Roundtable, Quan responded: “I decided to get back to acting. It was when the Daniels saw somebody did a joke on Facebook, and it was a picture of Andrew Yang running for President. The caption said Short Round is all grown up and he’s running for President, which triggered him to go, ‘Oh, I wonder what Ke is doing?’”

Thankfully for Quan, somebody online made that stupid meme. He added: “[Daniel] started searching, and he was doing the calculations, ‘Oh, he’s about the same age as his character’. It was at the same time that I called an agent friend of mine – I didn’t have an agent for decades – so I was practically begging him to represent me. He said yes.”

Fortunately, the two Daniels were looking for someone of Quan’s ilk just as he had decided to give acting another shot – some 30 years later. Quan went on: “Literally two weeks later, I got a call about the script, and I read it, and I was blown away by the script. Not only was it beautifully written, but it was a script I wanted to read. I was so hungry, so eager for a script like this, for a role like this.”

In fact, the script was so good that Quan remembers staying up all night “reading it until like 5am”. He added: “I sat there, and in my head, I had all these ideas that I wanted to do with this role, and I was watching out the window, the sun was rising, and I said, ‘Oh, I have to go to sleep’, because my audition was in the afternoon.”

However, despite his desire to secure the part, a wave of doubt overcame Quan. “Right before I went to bed, I go, ‘There’s no way they would offer me this.’ It was like impossible; it stars Michelle Yeoh and Jamie Lee Curtis,” he said. But Quan’s wife reassured him of his abilities and “kept encouraging” him.

Quan noted that it had been 25 years since he last auditioned for a part, so naturally, he was nervous. However, he was made comfortable by the Daniels and the film’s casting director, whom he called “amazing” and “so sweet”. Yet he must have feared the worst when he did not hear back for two months.

The long wait left Quan feeling “miserable” because he “wanted this role so bad.” Then, the call suddenly came in. “I went in to audition for the second time,” he said, which laid the foundations for one of the most important phone calls Quan would ever receive. He added: “You hear those three words, ‘We want you’, and I was screaming so loud, I was jumping up so high, and to this day, I cannot believe how everything came to be.”


Mystery shaking, rumbling felt along Jersey Shore again. No earthquakes reported.


For the second time this month, residents across southern New Jersey have been reporting long periods of shaking inside their homes Thursday afternoon, with windows and walls rattling. And just like before, there have been no earthquakes reported anywhere in the eastern United States.

There also have been no thunderstorms reported in or near New Jersey on Thursday, but some residents are speculating the rattling inside their homes — along with some reports of loud booms — may be linked to military planes and helicopters flying over the Garden State.

Naval Air Station Patuxent River, a U.S. naval station based in St. Mary’s County in Maryland, issued a noise advisory on its Facebook page Tuesday, saying it would be conducting “noise-generating testing events” between Tuesday and Friday.

“Pilots at NAS Patuxent River will be conducting Field Carrier Landing Practices (FCLPs). FCLPs are simulated carrier landings conducted to prepare the pilot to land safely on an aircraft carrier,” the agency said in its Facebook post.

“The practices consist of a series of touch-and-go maneuvers, called ‘bounces.’ Airspeed, altitude and power are all precisely choreographed in order for a pilot to approach the ship within an acceptable window to land on the deck safely,” the post added.

“Residents may notice increased noise levels due to these operations,” the post said.

It wasn’t immediately known how far the noise would carry. But Facebook has been packed with reports of shaking in homes and businesses across South Jersey Thursday afternoon. The first was around 11 a.m. and the second about two hours later.

Several residents noted they have felt some shaking or heard some loud booms in the past, but they said they never felt the rattling become as intense as it was on Thursday.

Among the towns or sections of towns where rattling was reported were Erma, Cape May, Galloway, Middle Township, North Cape May, Rio Grande and Smithville. Some residents said they felt their houses shake but heard no booms, while others said they heard loud booms.

“My whole house shook. Windows rattle(d), bed moved back and forth. And it was long,” one resident wrote on the Facebook page of South Jersey weather forecaster “Nor’easter Nick” Pittman. “I do hear the jets as I’m in Galloway near the airport, but this just seemed different. No boom, just steady shaking. At first I thought it was the wind but it got stronger.”

Another Facebook user in Atlantic County said: “In Smithville we just shook for a good 45-60 seconds with a small pause, but the dog and cats did not like it, this time was more than the sonic boom or break that we feel at 2 p.m. It was freaky!!”

On Friday, Jan. 13, residents from as far south as Cape May and up to Manahawkin along the coast and as far west as Glassboro in Gloucester County reported feeling shaking in their homes. They said the rattling lasted at least 10 seconds.

A supersonic military airplane was flying a few miles off the coast that day, and could have been the cause of the rumbling, the Press of Atlantic City reported at the time. The military has an Atlantic test track for flights about 3 miles off the eastern seaboard, and a sonic boom would occur if a plane was flying fast enough to break the sound barrier.

South Jersey isn’t alone when it comes to feeling and hearing loud noises. In early January, a loud boom — which some described as being as loud as an explosion — was reported by many people in northern New Jersey and northeastern Pennsylvania.

The cause of that boom was not immediately determined.

___

© 2023 Advance Local Media LLC

Distributed by Tribune Content Agency, LLC.
