
Discussion Posts

Discussion 1:
New York Times v. United States

As a journalist/editor, I would absolutely publish the Pentagon Papers on the strength of the information within the documents alone. Putting myself in this position, I would feel a duty to let the American people know that they were being deceived by their government. According to the Pentagon Papers, American forces expanded their role in the war by bombing Cambodia and Laos, and the war was kept active for as long as it was largely to obscure the fact that the U.S. was losing. Keep in mind that this was at a time when the American people believed not only that there was a strong rationale for fighting the war, but that the U.S. was having a measure of success in it. Seeing as the Pentagon Papers are an analysis of American wartime efforts rather than an active operational document, there is no real tension here between the public's right to know and national security.


Prior restraint would be unconstitutional in the event that a TV network planned to broadcast footage of flag-draped coffins being returned to the U.S. The images are not obscene, and no action or process hinges on their not being broadcast.


Prior restraint would also be unconstitutional in the event that a magazine wanted to publish an interview with and photo of a seriously wounded soldier (assuming this is not an 'active wound,' it wouldn't be obscene). No action or process hinges on that interview and image not being shown.


Prior restraint would probably be constitutional in the event that a newspaper wanted to publish an article detailing a city's response plan to a terrorist attack. If the government decided to pursue prior restraint here, it would have strong grounds: it would be dangerous for would-be terrorists to know exactly what a city's response would be in the event that they decided to strike.


Prior restraint would be constitutional in the event that a report included a map of Iraq with troop positions. Making active military information public knowledge poses a direct and immediate danger to those soldiers as well as to American wartime efforts.

Discussion 2:
Speech Harms & Symbolic Speech

Harm is defined as words that disrupt organized activities or offend and degrade people. This definition extends to threats and harassment, which can serve as a catalyst for (often violent) conflict. The types of speech this applies to include offensive speech; "fighting words," which inflict or incite an immediate breach of the peace by their 'very utterance'; and hate speech, which demeans or attempts to devalue others on the basis of their identity. It also includes threats and intimidation, as in Virginia v. Black, where cross burning (a means of intimidating Black households into moving away) was treated as a type of harm and met the criteria for a true threat in that it instilled fear of violence.


Media can incite harm 'illegally' by specifically advocating a form of violence: for instance, if a TV show explicitly says young people should fight more often and a viewer hurts someone after watching the show, that is incitement of physical harm. Media can harm through negligence too. If a reasonable person could foresee that a depiction in media would lead to a punishable offense, that is negligence. The example in the book is probably the best: by running an advertisement that clearly advocates a murder and inevitably leads to one, the publisher is negligent and proximate cause is strong. I agree with these concepts within reason. Proximate cause is very important because, as someone who plays media like video games, and as a reasonable person, I have never wanted to carry out the fantastical, dangerous, or violent acts depicted in those games; proximate cause helps distinguish what is a direct result of a piece of media from an individual decision by the person.


Symbolic speech is a form of expression that, while possibly upsetting to certain audiences, doesn't quite constitute incitement or the aforementioned speech harms. Symbolic speech is protected from punishment in cases where a government entity wants to shut down an idea it doesn't like. For instance, in Virginia v. Black, burning a cross to intimidate is harmful speech, but (to play devil's advocate) burning a cross during a rally where only KKK members are in attendance (think the scene in Bad Boys II) would be considered protected symbolic speech so long as there are no direct threats toward anyone. Another example of protected speech would be someone burning their vaccine card in protest of imminent legislation on vaccine mandates. A final example would be something like defacing, altering, or burning the American flag.

Discussion 3:
Fake News & The Law

In my discussion post, I touched on two articles in an attempt to discern truth from falsity and fact (or statement of fact) from opinion. The first article concerned an African government blocking its citizens' lines of communication. The article in question was scrutinized against the elements of truth depicted in our textbook, and by the end it was clear that Ope Adetayo's article satisfied the criteria for 'truthful' news. The first big clue in this discovery was the fact that the Ollman test didn't quite apply to it; it clearly was not parody or satire. The article presented multiple, unbiased viewpoints, and both of these elements indicate that the newsgathering process was trustworthy and legitimate.


The second article was a bit shorter and concerned an alleged air strike on an airport in Damascus, Syria, with the finger pointed at the Israeli Air Force. Right away, the problem with this article was that blame was directed at a government agency for what is a seriously concerning display of force, without any reliable source. The short piece reads more like a rumor than a report. It relied solely on previously published material, and both sources of that material were either not especially legitimate or defunct altogether. It's not beyond the realm of belief to conclude that the story wasn't thoroughly researched or investigated on the author's part, and it therefore doesn't reflect a 'truthful' newsgathering process in the way the first article does.

Discussion 4:
Regulation of Privacy

1. Facebook, Instagram, YouTube, and other similar platforms primarily serve two purposes: personal expression and content creation. The privacy implications commonly outlined in their terms of service agreements therefore have more to do with you giving up privacy than with granting it to you. Essentially, when you use any of these apps, the terms of service express what privacy you may expect, then explicitly outline which privacies you should not expect. For instance, Facebook and Instagram can use any of the information you post for advertising, and they don't have to pay you. You essentially waive any expectation of privacy when associating with these applications.


2. When it comes to reasons for disabling or enabling settings: according to Time Magazine, applications like TikTok include the collection of incredibly personal data in their terms of service, specifically biometric data such as 'faceprints' and 'voiceprints.' This could be a (legal) infringement of privacy that may affect your motivation to use the content creation tools present on TikTok. According to the Washington Post and NY Times, Facebook and other similar apps now have to ask for permission to track your iPhone usage (this development is so new and widespread that I'm sure everyone with an Apple product notices the question pop up when downloading a new app). This mandatory pop-up gives users the ability to explicitly deny an app access to what they may believe constitutes their privacy.


3. It's important to take ownership of what you share in online spaces because there are risks associated with not doing so. Platforms that claim the items you publish are 'user-generated content' have immunity from their consequences as a result of Section 230 of the Communications Decency Act. Therefore, you are held responsible for the contents of what you publish. In addition, taking ownership of your content includes things like protecting it from copyright infringement.


4. The FTC protects consumer privacy rights by cracking down on unfair practices or policies implemented by commercial organizations. Under the watchful eye of the FTC, organizations are not only required to keep certain information private (like credit card information, health information, etc.) but are also required to be fully transparent in their messages to consumers and other stakeholders, protecting consumers from deceptive advertising. The FTC takes legal action against companies for violating specific legislation, like the Fair Credit Reporting Act.

Discussion 5:
Purchase of Twitter by Musk

There are tons of unfounded concerns revolving around Elon Musk's alleged/imminent purchase of Twitter. In response to Business Today's take, and as an avid Twitter user, I can't remember an instance where Musk has posted anything that would warrant extensive corrective action against him, so to claim that escaping such moderation is his primary goal in acquiring Twitter takes away from the discussion that really needs to be had: what is Musk's definition of a social media platform with free speech?


Now, responding to Time Magazine's take that prioritizing free speech on these sites drowns out civil discourse in favor of harassment, I have a personal retort: Twitter was never really a place for true civil discourse in the first place, so any assumptions that the purchase would have a negative or positive effect on democracy are inflated. If a purchase by Elon Musk could in any way devalue the platform from what it is today, that wouldn't be beyond the norm, seeing as social media sites blow up in popularity and decline all the time. The most recent example of this is Tumblr: according to the Washington Post, the service essentially died once Verizon took the reins, in the midst of some adult content bans.


https://www.washingtonpost.com/technology/2019/08/13/tumblr-once-sold-billion-owner-wordpress-just-bought-site-fraction-that/


The one true concern comes from the possibility that Musk would be held blameless for the content he could or could not provoke on the platform (even though his posting style and content aren't extreme enough for this to be plausible currently). Any lawsuits stemming from this, however unfruitful, would make their way into the media and could devalue the platform by making its owner look unscrupulous.


Social media platforms have cultures. Snapchat has snap streaks and stories, Instagram has making your life look better than it actually is (I'm exaggerating and oversimplifying), and Twitter's culture is one of conversation, however unconstructive it can be sometimes, but more importantly, accountability. What I'm getting at is that the first line in the sand in governing Twitter is its users themselves. Because Twitter is by its very nature a discussion platform (you can't communicate as effectively to that many people in two-way communication on ANY other social media site), sentiments surrounding the way Musk handles the platform would be far more visible to the entire world.


When it comes to ownership of speech, Section 230 of the Communications Decency Act may very well protect Musk, but it does not extend that same protection to users. The burden of responsibility for effectively utilizing free speech transfers directly to users in the absence of policies and guidelines that promote respect and couth. This, I believe, barring a devalued Twitter, will become the new status quo for holding content creators accountable for the things they say on Twitter. There's precedent for this: according to the New York Times, an Australian minister successfully pursued a defamation case against a random Twitter user who called him a rape apologist.


https://www.nytimes.com/2021/11/24/world/australia/defamation-lawsuit.html


Governments do have the capacity to allow private citizens to pursue the law's remedies themselves. This is clear in the way Texas handles its abortion legislation through civil (private) enforcement. While the example used is not the most popular by any means, it answers the question of who could possibly bear the burden of protecting (rather than enforcing) free speech on a platform.


https://www.texastribune.org/2022/03/01/texas-abortion-law-supreme-court/
