Are you writing a cause and effect essay?

A cause and effect essay is a great example of a school assignment that strengthens students’ analytical abilities and critical thinking because every essay has a purpose and imparts specific knowledge to its audience. Here is a guide that will explain how to write your own cause and effect essay, how to steer clear of common blunders, and how to select a topic. Let’s get going!

Choosing a Topic for Your Essay on Cause and Effect

Finding a topic to write about is the first step in writing any type of essay. Choosing a topic is rarely the simplest task, but a cause and effect essay is an exception. Why? Let’s start with how a cause and effect essay is defined.

Definition

An academic paper based on facts, events, and actions that result in a certain outcome is known as a cause and effect essay. Such an essay’s goal is to examine the effects and results—also known as the causes and effects—as well as their relationships and potential aftereffects.

Types

Depending on the relationships between causes and effects, there are several types of cause and effect essays.

The first type is a paper with multiple causes and one effect. It illustrates and analyzes how a variety of causes contribute to a single result. If you’re writing this kind of cause and effect essay, divide your body paragraphs into a few sections, with each section focusing on a different cause and its contribution to the result.

One cause and multiple effects is a style of essay that aims to demonstrate how a single cause can have a number of different effects. Researching the subject in-depth and examining the relationships between various factors are requirements for this kind of paper.

The most challenging kind of cause and effect essay is the chain of causes and effects. It is based on a network of interconnected causes and effects, where one outcome becomes the cause of the next. Your paper should demonstrate the relationships between different behaviors and occurrences, as well as how they lead to subsequent causes and effects.

Be sure to keep the type of essay you are required to write in mind when selecting your essay topic. Not every subject is appropriate for a specific type.

As you can see, it’s not difficult to come up with a topic for a cause and effect essay given the variety of issues and topics that surround us on a daily basis. Here are some suggestions for cause-and-effect papers:

The impact of contemporary music on teen culture.

The effect of fast food on obesity among junior high school students.

How online bullying affects mental health.

What makes certain videos go viral?

How pet ownership affects people’s health.

Make sure to choose a topic for your essay that speaks to you; this will help you write a thoughtful, in-depth essay. Consider the things you observe in your environment that you can easily link to specific causes. Consider urbanization and its impact on the environment and human health.

How to Structure Your Work

Once you have decided on your essay topic, it’s time to start researching it in order to find the most reliable sources for your facts and the most recent information on the subject. This type of essay frequently calls for statistics to support your argument; make sure you cite only trustworthy data from reliable sources.

A cause and effect essay follows a format that is very similar to other types of essays you may have written in the past. It should begin with the introduction section, which includes your thesis statement, then use the body of your paper to support your position with statistics and facts, before summarizing the subject in the conclusion section.

The Beginning

You present some background information and formulate your claim, or thesis statement, at the beginning of your essay, in the introduction. You can also include some statistical data to help your reader understand the significance of the issue you’ve chosen to write about. In fact, a lot of students start their essays off with a statistic as the hook. If you can find some intriguing statistical information, you should use it in the introduction section to pique the reader’s interest.

The Thesis Statement

Your thesis statement is the central idea of your essay; it should concisely summarize your thoughts and ideas to help the reader understand the focus of the paper. It needs to sum up your main point: the connection between cause and effect. For instance:

“Cyberbullying cases have increased as a result of the development of social media; more organizations should be established with a focus on providing psychological support for victims of cyberbullying.”

The Body Sections

Your essay’s body paragraphs should be centered on the cause (or a number of causes), the effect (or a number of effects), and the relationship between them. You can support your arguments with a variety of evidence, including statistics, relevant research, quotes from reliable sources, situation modeling, etc. Your goal should be to draw logical conclusions and approach the problem from various perspectives, not to persuade the reader.

Don’t jump from one thesis, claim, or fact to another in your body paragraphs without a proper transition. Use transitional words in the final sentence of each paragraph to link one to the next.

Additionally, you should make sure that each paragraph in your essay focuses on just one cause (or effect) and has sufficient evidence to back up your claim.

The Final Verdict

Your cause and effect essay’s conclusion will be similar to those of other types of essays you typically write. In other words, it has no requirements beyond summarizing your essay and your point of view. Restate the thesis statement and the important details from the earlier sections of your essay, though you shouldn’t repeat the thesis word for word; paraphrase it instead. Wrap up your essay with a strong statement: consider suggesting some lines of inquiry for additional study or potential solutions to the issue. For instance, if you are writing an essay on air pollution and how it contributes to global warming, you could end with recommendations for how humanity can lessen this negative impact through recycling and alternative energy sources.


Additional Writing Advice for Cause and Effect Essays

So now you are aware of the fundamental format of a cause and effect essay as well as the subject matter. Here are a few pointers and techniques for writing that you will undoubtedly find useful:

Choose a topic that is not overly mundane. If you are unsure of the topic’s suitability for this type of essay, ask your teacher or professor for guidance.

Use the precise language that is typical for cause and effect essays: because of, as a consequence of, as a result, consequently, thus, therefore, etc.

Use phrases like “the first cause (effect) is,” “another cause (effect) is,” “one of the causes (effects) is,” etc. if you are dealing with multiple causes or effects.

Maintain the logical flow of your essay by making sure that each and every claim you make does in fact support your thesis.

Make sure you thoroughly research the subject and become acquainted with various viewpoints. If you disagree with the findings of other researchers and have a different opinion, you will need enough reliable evidence to support your claim.

Use several examples to support the causes and effects you discuss in your essay. It’s always helpful to have more than one example to use as evidence in academic writing, so start writing only after you have a sufficient number of reliable justifications.

In the event that you require assistance with a school or college assignment, be sure to contact a professional writing team. Be confident that our subject matter experts can assist you in producing a top-notch cause and effect essay or any other type of academic work. Our experts can write an essay from scratch for you in addition to helping you polish, edit, and proofread your current one.

Make sure to thoroughly proofread your paper before submitting it, and make any necessary changes. Verify your work’s grammar, punctuation, and overall writing style. If you notice that your work lacks credible evidence, you might need to add more details.

That’s all for the cause and effect essay. We hope you benefit from our guide.

Want to shorten your essay?

The significance of word count and how to lengthen an essay without adding pointless or irrelevant details have already been covered. When you add something to increase your word count, your professor can tell right away. However, they can also tell when you omit crucial details to make your essay shorter. To shorten your essay without having it look cropped, read this blog post.



Why Cut Your Essay Short?

Some tasks are designed to teach you how to express your ideas, opinions, or gathered information concisely. If you think that writing a long essay is difficult, you might be surprised to learn that writing a short essay that still covers a topic in depth is just as hard.

Every assignment has specific requirements, so if your essay must fall within a certain word count, get ready to follow these guidelines. Although it might seem absurd, an essay that is too long can hurt its grade, just as one that is too short can.

How, then, can you condense your essay without ruining it or erasing sections you spent so much time writing? Here is some excellent advice for you.

Reword Complex Sentences

Sometimes there is a better way to express your ideas and shorten sentences. For instance, using the active voice rather than the passive can greatly assist you with that. Academic writing frequently employs the passive voice because it gives your writing a more objective and professional tone; however, it also makes sentences harder to read and requires more words to complete them.

When you need to eliminate redundancies, the most effective method is to proofread and edit your essay, making sentences shorter and simpler. Work to cut words that aren’t necessary from longer sentences.

Look for common transitional language and padding phrases. You could completely omit words like “really,” “very,” “in order to,” “just,” and “needless to say,” for instance. Some longer phrases can also be replaced with shorter ones: “despite the fact that” – “even though,” “due to the fact that” – “because,” and “for the purpose of” – “to.”

Examine Your Positions

In order to support your claims in essays, you must address one or more arguments. Each argument has a different degree of applicability when discussing a particular subject. Therefore, whenever you need to condense your essay and cut out a significant amount of text, make sure to keep the best and most persuasive arguments and supporting details while cutting out or downsizing the less significant and secondary ones.

Remove Extraneous Quotations

Quotes can increase the value and objectivity of your essay, but they also increase the word count significantly. Feel free to use them whenever you need to write a longer essay or when your task requires a specific quote.

However, if you need to condense your essay, quotes should be the first thing you edit and remove if you believe they don’t add much to the piece. Utilizing outside resources is almost always required, but direct block quotes are not. You can always cite other authors’ works when discussing their viewpoints. Citing sources allows you to develop your arguments and back up your assertions with pertinent data without increasing the overall length of your essay.

Make Certain You Don’t Veer Off Topic

Making your essay simple to read and understand doesn’t always translate into making it more succinct. We enjoy including transitional phrases and ornamental language because it enhances the artistic quality of our academic writing. You must, however, take the necessary actions when your assignment is to make your essay shorter.

Make sure you don’t veer even the slightest bit from the subject and central contention of your essay. Describing fascinating illustrative examples and cases can demonstrate your familiarity with the subject, but it will make it difficult to edit your paper down to the required word count. Stay on topic and support your thesis statement with concise, clear sentences.

Conclusion

These were the most helpful pointers for cutting your essay down without ruining it. We’re hoping you’ll be able to complete that essay assignment—even one as difficult as writing a brief, “straight-to-the-point” essay—with flying colors. And if you don’t, you can always send your request to us, and our team will be happy to assist you, regardless of how many words your essay needs to be.

Are you aware that the SAT is going DIGITAL?

Since the SAT has been around for such a long time, American academic culture has become accustomed to it. Numerous colleges continue to use it as a criterion for assessing applicants, and students, parents, and educators all understand its significance and goal.

The College Board has been using the SAT to determine which high school students are eligible to attend college for a century now, and while it still exists today, it has undergone a sizable number of changes. The College Board recently announced that the SAT will go digital by 2024, which may be the biggest change yet. It’s a choice motivated by the desire to increase the SAT’s usability for students and to keep the test current in a time when colleges don’t value it as highly as they once did.

Many colleges and universities temporarily suspended their SAT (or ACT) score submission requirements when Covid-19 forced schools to temporarily close their doors in 2020. Only a few schools have brought back those requirements two years later. But even before the pandemic, many colleges already made the submission of test results optional on application forms.

Despite this pattern and the numerous test cancellations brought on by Covid, many students have continued to take the SAT and chosen to send their scores to potential schools. Priscilla Rodriguez, the College Board’s vice president of college readiness assessments, reaffirmed the organization’s dedication to upholding students’ right to submit their test results: “Evidence shows that when colleges consider SAT scores in the context of where students live and attend school, the SAT helps increase diversity. The SAT will continue to be one of the easiest and most affordable ways for students to stand out as we recover from the pandemic.”

Is the SAT available online right now?

The custom of using a No. 2 pencil to fill in the bubbles on an answer sheet is one of the SAT’s long-lasting legacies. Over the next two years, everything will change.

Even though the SAT will be administered in a digital format, it won’t be available everywhere. The new SAT is not an online test that students can take whenever they want from wherever they choose. Instead, it will continue to be given either in designated testing locations on weekends or in schools during the week.

This isn’t the first time the SAT has changed.

The College Board’s choice to develop a brand-new digital exam is consistent with the changes the organization has made to the SAT Suite of Assessments over the past eight to ten years. In addition to the SAT, the College Board also offers the PSAT 8/9, PSAT 10, and PSAT/NMSQT (National Merit Scholarship Qualifying Test). Students in the eighth, ninth, or tenth grade take the PSAT, a condensed version of the SAT. The majority of the assessments in the College Board’s Suite of Assessments are now available in digital format, and have been for a while.

The SAT School Day program was launched by the College Board in 2010 in an effort to give low-income students more access to the exam. Although it took some time for the initiative to gain traction, the College Board now has contracts with 20 states (plus Washington, D.C.) to provide free SAT testing to juniors in high school. Other states offer the test on school days to give opportunities to students who might not have access to the funds or transportation to take it on a Saturday. Some states require the SAT and use it as a standardized assessment. There is already a digital version of the current test being used in many of the states that offer the School Day SAT.

But the 2024 SAT is not simply new because it will be delivered digitally rather than on paper. Additionally, it will incorporate a number of format and design changes aimed at streamlining and simplifying the questions.

Hasn’t the College Board already made significant changes to the exam?

With everyone trying to keep up with the changes the College Board has implemented over the past 20 years, the parents of today’s students may not realize just how different the test is from the one they experienced.

After finally allowing calculators and introducing student-produced (i.e., not multiple-choice) questions in 1994, the College Board shifted its focus to writing in 2005. An essay in which students responded to a prompt by formulating and defending a thesis was added to reflect the significance of clear and effective writing. The total score changed from 1600 to 2400 points, with 800 points each for reading, math, and writing.

The new format, however, was short-lived. The College Board sought to develop a test that better reflected the actual work students were doing in high school, driven by the vision of David Coleman, who took over as CEO in 2012. Coleman said, “We are not interested in students just picking an answer, but justifying their answers,” after becoming frustrated with the way students had been able to boost their test scores using test-taking “tricks.”

In place of its previous design, the College Board adopted a more college-readiness-focused approach in 2016.

The new test had four sections: Reading, Writing and Language, No Calculator Math, and Calculator Math, as opposed to multiple short sections for each subject.

All of the questions in the Reading section now related to passages provided in the test, and vocabulary questions (possibly the last remnant of an intelligence test) were eliminated.

The 1600-point scoring system was reinstated, with 800 points awarded for Evidence-Based Reading and Writing (a combination of the Reading, Writing, and Language sections) and 800 points awarded for Math (a combination of both Math sections).

The essay was revised, but it was made optional. Students had to sign up in advance to take it, and essay scores were computed independently of other test results. The essay was completely eliminated from the test by 2021, though some states still use it for their own assessment procedures.

What modifications are being made to the actual SAT?

The SAT is undergoing significant change, but some aspects will not change. According to Priscilla Rodriguez in an official video statement, the digital SAT Suite will continue to measure the knowledge and skills that students are learning in school and that are most important for college and career readiness. The test will be given only at a school or testing facility (not at home), and Khan Academy will still provide free online practice materials.

Students who require special accommodations can get them because the College Board is dedicated to making the exam accessible to all students. The accustomed 1600-point scoring system will continue to be used, and significant scholarship opportunities will still be based on student performance.

What precisely is changing with the SAT, then?

Shorter passages with only one question associated with them will replace the lengthy passages with 10–11 questions each in the Reading section. The passages themselves will cover a wider range of subjects to better reflect what college students read.

Less wordy math questions will be included, and the entire Math section will allow the use of calculators. Students may still bring their own SAT-approved calculators, but an app based on the online Desmos calculator will be available.

From timing to navigation, everything the College Board does with the digital design aims to make the test more user-friendly. Students will have more time to answer each question because the SAT will now last only two to two and a half hours. The digital exam will also have an on-screen timer and a feature for marking questions for later review. And instead of waiting weeks to receive their results, students will get their scores sooner; their score reports will also connect them with nearby two-year colleges, workforce development initiatives, and career options.

Why would there be such a large change now?

The SAT is still crucial, regardless of how colleges have changed their views on test scores in applications. The SAT remained popular among students even after many schools became test-optional in 2020 and Covid frequently caused test cancellations.

Students want the option to submit their SAT scores.

In a poll conducted by the College Board, 83% of students said they would like the option to submit their test results to colleges.

Priscilla Rodriguez noted that despite an increasing number of colleges making the SAT optional, students still see the scores as a valuable addition to their applications: “Some students may decide their application is stronger without test scores, while others will benefit from sending them, including the hundreds of thousands of rural, first-generation, and underrepresented students whose SAT scores strengthen their college applications.”

More students can take the SAT because it’s digital.

More students will be able to take the SAT now that it will be offered in digital format. This benefits students as well as the College Board, which aims to remain relevant in an academic culture that increasingly views the SAT as an optional component of college applications. The SAT will also be simpler to administer because the new digital test will be considerably shorter than the current one. Instead of being constrained by a set schedule, states, districts, and schools will have more options for when, where, and how frequently they administer the SAT.

The shift to digital applications is already happening because some schools already offer digital SAT and PSAT versions. The International SAT in March 2023 will mark the debut of the new design on a grand scale. All PSAT tests will switch to the new digital format by the fall of 2023. The entire SAT Suite of Assessments will be available digitally in the updated format starting in March 2024.

How will the updated SAT be more user-friendly?

The College Board has concentrated on students’ access to technology and SAT preparation materials after realizing that the majority of students already complete a sizable portion of their schoolwork on digital devices. Through Khan Academy, students can access official practice exams without having to pay for costly resources or preparation courses.

Since many students use their own devices to practice for the SAT, taking the actual test on those same devices will feel more familiar than answering questions on paper. Students will be able to access the actual SAT on their own laptop or tablet via a secure server. If a student doesn’t have a device on test day, or their school doesn’t have any, the College Board will lend them one. Additionally, if Wi-Fi goes down or a device dies, students will be able to plug in and reconnect without losing any time or work.

What are the SAT changes’ main advantages?

The College Board ran a global pilot of the new SAT in test centers in the US and abroad in November 2021. The test was overwhelmingly supported by students, educators, and test administrators, who praised its improved design and straightforward administration.

The examination will go more smoothly.

The initial procedures are more efficient than what is currently in place. Students won’t have to spend time checking in and filling out paper forms, and without physical copies, educators won’t have to deal with packing, sorting, or shipping test materials. Students also won’t have to stress about remembering to pack pencils, calculators, or even digital devices, because all the materials are provided.

The examination will be more practical.

Positive student feedback on the test itself was particularly noteworthy. Students said that having the material available digitally was more convenient, thanks to the ability to switch between the questions, an online calculator, and a formula sheet. The built-in clock made it much simpler to keep track of the remaining time: “This reduced my stress as I knew I had enough time to think critically about the question rather than panicking and choosing a random option when I was unsure of what I had just read.”

Students clearly appreciated the feature allowing them to flag questions for review: “I loved that I could go back to questions that I had flagged, since usually on paper I take extra time to find the questions I had missed.” The ability to mark one’s response quickly and confidently was incredibly relieving, according to test-takers, who also found the process of answering to be much more efficient than on the paper-and-pencil test.

The examination will be more efficient.

The updated questions and simplified structure received more praise. The newly redesigned Reading section was well received: “I really liked reading the brief passages and responding to just one question. That lessened my anxiety.” The reading passages were “much better and more interactive than the long reading texts on the current SAT,” according to English language learners. Students found it easy to analyze and respond to the questions in the Math section, which they felt “were shorter, made sense, and got straight to the point.”

The examination will be safer.

Finally, the simplicity and security of the new SAT are appreciated by test administrators and proctors. The test center coordinators “didn’t have to spend another half hour at the test center just to make sure that things are done,” as there were no forms to complete, paper tests to package, or answer forms to fill out. With the current paper-and-pencil test, the test administration can be cancelled or student scores can be disregarded if even one test form is compromised.

With the SAT going digital, each student will also receive a distinct test form, making it nearly impossible to share answers.

Why are students advised to take the SAT?

Parents and students may understandably wonder why it is necessary to take the SAT since the vast majority of colleges and universities do not require SAT scores as part of the college application process. The SAT is not irrelevant just because it isn’t a requirement for an application.

The real question is not whether you must take the test, but what advantages your SAT scores will bring.

If the SAT is optional at the majority of colleges, why does it even matter?

According to the College Board, of the 1.7 million students in the class of 2020 who took the SAT, approximately 300,000 were from rural or small towns, 600,000 were first-generation college students, and 700,000 belonged to an African American or Latino racial or ethnic group. The SAT continues to “open doors for students of all backgrounds,” according to Rodriguez.

Of the students in the class of 2021 who took the SAT, 62% were able to do so for free on a weekday at their school. Independent studies demonstrate that universal testing during the school day increases the percentage of low-income students who enroll in college. Rodriguez draws on her own experience as an immigrant kid who arrived in the U.S. on a shoestring budget to highlight the importance of the SAT to these students: “I know how the SAT Suite of Assessments opened doors to colleges, scholarships, and educational opportunities that I otherwise would not have known about or had access to.”

Finally, with more than 25,000 high schools in the United States, the SAT remains an objective test that every student can take. Colleges cannot be familiar with all of those high schools, let alone every student in them. And as grade inflation continues to rise in American high schools, it is becoming more difficult to decide which students should be accepted.

The percentage of students graduating from high school with an A average has increased from 39% in 1998 to 55% in 2021, even though high school grades continue to be the most widely accepted indicator of a student’s ability. Colleges need more data points to make their decisions when more than half of students receive As.

For many students, that means extracurricular activities, but clubs and sports are frequently expensive and out of reach for many families. When taken into account, test results can support a student’s GPA and reveal strengths beyond what grades might suggest.

According to preliminary findings from the College Board’s pilot program, the new digital SAT will give students a better opportunity to demonstrate their knowledge and stand out on college applications.

Climate Change Essay

Academic Discipline: Geography
Course Name: Geography
Assignment Subject: Mitigating against climate change
Academic Level: Undergraduate-1st Year
Referencing Style: Harvard
Word Count: 2,119

Climate change is a man-made phenomenon that threatens the survival of local and global ecosystems. According to the United Nations Environmental Program (UNEP), “the global average temperature in 2019 was 1.1 degrees Celsius above the pre-industrial period”, which has affected the planet through higher levels of GHG emissions, and the increased magnitude and frequency of extreme weather events. The international community has pioneered a range of climate change mitigation agendas, which stress the importance of a multi-stakeholder approach that involves governments, corporations, households and individuals alike. The purpose of this expository research is to explore some of the climate change mitigation strategies that can be used at the governmental level, identify their relative advantages and disadvantages and evaluate which strategy is optimal for governments. It will be argued that the adherence to international climate change agendas and the imposition of a carbon tax are the most effective mitigation measures.

Climate Change Mitigation Strategies
Governments are among the leading stakeholders who can facilitate climate change mitigation, primarily because they have the political legitimacy and authority to implement policies that can constrain the behaviour of other social actors, such as corporations, households and individuals. The first broad climate change mitigation approach is the imposition of a carbon tax, which charges consumers for fossil fuel consumption. The carbon tax system has been empirically used in developed and developing countries alike, including Canada, Japan, Mexico and South Africa (Lai 2021).

The first advantage of a carbon tax system is that it is a demand-side policy that seeks to address climate change by reducing the level of demand for fossil fuels. Imposing a tax on fossil fuel users therefore acts as a disincentive that can be applied across all consumer categories (Ionescu 2019). The second advantage of a carbon tax system is that it can be uniformly applied to all consumers, meaning that its standardization potential increases the scope of application (Kutasi and Perger 2015). The third advantage of a carbon tax system is that it encourages users to adopt alternative energy forms in order to curtail the cost associated with fossil fuel consumption. This implies that the carbon tax can spur innovation insofar as it creates an incentive for users to find alternative, renewable forms of energy for their daily needs (Ionescu 2019).

Notwithstanding these advantages, the carbon tax is also fraught with challenges that may undermine its viability as a driver of climate change mitigation at the global level. First, the carbon tax is perceived as a regressive tax, in that it imposes the greatest burden on low-income earners (Osman et al. 2021). Since fossil fuels become more expensive, low-income households that lack alternative sources of energy must spend a higher proportion of their disposable income on the tax, leaving them worse off than their more affluent counterparts. Second, the carbon tax does not adequately capture the environmental cost of pollution in either the immediate or the long term. Governments may therefore be uncertain about the accurate tax rate, leading to scenarios where the tax is too low or too high, both of which would negatively affect climate change mitigation (Withey et al. 2022). Third, the carbon tax system creates an incentive for tax evasion: wealthy individuals, households, or corporations may decide to offshore carbon production to developing economies with less stringent carbon tax measures. Countries may consequently fail to apply the carbon tax uniformly, thereby undermining the global climate change mitigation agenda.

The second broad strategy that has been used to combat climate change is the cap-and-trade policy. The system essentially provides polluters with a carbon or emissions quota, which they are legally bound to operate within. In instances where a polluter has emissions remaining on their quota, they are free to sell the surplus to a third party who may have exhausted theirs (Rabe 2016). The cap-and-trade system is arguably preferred to the carbon tax system, primarily because it provides a degree of autonomy to users. The first advantage of the cap-and-trade system is that carbon consumers can meet their production needs while simultaneously adhering to the expected levels of emissions. In other words, corporations can maintain their production levels and keep emissions at or below the legally mandated levels (Rabe 2016). The second advantage is that the system provides flexibility for carbon consumers, in sharp contrast to the tax regime, which constrains users more rigidly. Third, the cap-and-trade system is beneficial because it provides a valuable revenue stream for governments, which can then redirect the financial resources towards other climate change mitigation projects (Liu and Li 2017).

Despite the range of benefits associated with the cap-and-trade system, the approach also has limitations that warrant consideration. First, the system is arguably biased towards corporations and social actors with high revenues, as these actors can emit beyond their respective quotas by purchasing carbon credits from users with lower total emissions. Second and relatedly, the system is disadvantageous for smaller actors who may lack the financial liquidity to purchase the additional credits needed to meet their own production requirements. Third, the system can be problematic in that it does not necessarily encourage the development of alternative energy resources, as corporations would still be permitted to emit greenhouse gases (Zhang et al. 2021). The implication is that this approach is not a viable long-term solution for eliminating carbon dependency within the economy, as it simply attaches a price to emissions. Finally, the cap-and-trade system can be problematic in that it increases the price of fossil fuels (Liu and Li 2017). Corporations may therefore face higher operational costs in the medium to long term, which can negatively affect production and performance at the aggregate level.

The third broad strategy that governments can use to mitigate climate change is creating incentives for social actors to switch towards renewable forms of energy. For instance, governments can provide funding for renewable energy research and development (R&D); impose import restrictions on fossil fuels; and provide tax cuts, among other incentives, for corporations that are developing renewable energy (Michaelowa 2004). The first benefit of an incentive system is that it reduces the level of carbon dependency within the economy, thereby presenting a viable long-term avenue for mitigating climate change (Dale et al. 2020). The second benefit is that it fosters competition among the different actors, thereby creating multiple solutions that can thereafter be tailored to meet the specific needs of energy consumers and producers. The main disadvantage of an incentive system is that it does not necessarily affect current levels of GHG emissions and fossil fuel dependency, such that consumers may continue relying on fossil fuels while renewable alternatives are developed. Another potential limitation is that the approach has high utility mainly for industrialized and affluent countries, whose governments possess the financial resources to provide adequate incentives for renewable energy. In other words, the approach may have limited utility for developing countries, thereby undermining the ability to achieve sustainable development objectives at the global level (Sforna 2019).

The fourth climate change mitigation approach that governments can use is adherence to international sustainable development frameworks. The international community has made substantial progress in this regard through the establishment of the Sustainable Development Goals (SDGs). The first advantage of an international SDG framework is that it is tailored to the needs and realities of each country, such that the SDGs applicable to industrialized economies are not necessarily mirrored by the goals applicable to developing economies (Sachs 2012). This is beneficial because it enables each country to set its own climate change mitigation agenda, based on a thorough understanding of its internal strengths, capabilities, resource endowments, and constraints. The second advantage of the SDG approach is that the goals are often measurable and tangible, which makes it easier to trace the progress made towards them (Sachs 2012). The third advantage is that it offers a comprehensive framework for combating climate change, as the attainment of the SDGs necessarily requires cooperation and collaboration from a range of stakeholders in each country. For instance, achieving the SDG of providing potable water to local communities requires contributions from corporations, nongovernmental organizations, local communities, households, and individuals. Finally, the SDG approach is beneficial because it aligns environmental responsibility with a range of social and human development indicators, thereby highlighting the relationship between environmental, human, and economic development (Pandey 2017).

The SDGs have been lauded as one of the most important international agreements of the contemporary context. However, to date, no country has achieved all the stipulated objectives. The first limitation of this approach is therefore that it can be perceived as overly ambitious, which undermines its credibility as a means of meeting sustainable development objectives (Sachs 2012). The second limitation is that most SDGs highlight structural problems faced by local economies yet lack any structural-level solution that can be leveraged to achieve the objectives. For instance, one of the SDGs is the elimination of poverty and hunger. The complexity of poverty in any context highlights the importance of structural-level factors, such as the social determinants of health, institutional processes of discrimination, and the marginalization of vulnerable populations. Against this backdrop, the SDGs still lack structural-level solutions, for instance policies that could reform the global capitalist system and address the needs of vulnerable populations in the Global South.

Analysis
The focus of this analysis has been on evaluating climate change mitigation strategies, specifically those that can be implemented by governments. The research has identified four broad strategies: the imposition of a carbon tax; the introduction of a cap-and-trade system; the creation of incentives to improve the adoption of renewable energy alternatives; and adherence to international climate change mitigation accords such as the SDGs. Each of the identified strategies is associated with both advantages and disadvantages. The purpose of this section is to evaluate which of the alternatives is the best approach for government agencies, based on the discussion conducted in the previous sections.

The objectives of a viable and comprehensive climate change mitigation approach should encompass both the reduction of current levels of emissions and the fostering of higher adoption rates for renewable energy alternatives. By this standard, the weakest of the climate change mitigation solutions is the cap-and-trade system. This is because the approach not only permits carbon emissions (thereby failing to reduce the actual level of emissions) but also lacks any direct orientation towards the promotion of renewable energy resources (Liu and Li 2017). An equally inferior approach is the incentive system, as it is a medium- to long-term plan that fails to address current levels of GHG emissions, albeit while fostering innovation and the development of alternative renewable energy resources.

The adherence to international frameworks such as the SDGs is laudable because it provides each country with a unique set of objectives that can be used to address the climate change concern (Sachs 2012). Moreover, the approach's multi-stakeholder orientation facilitates thorough implementation involving a broad base of social actors. The carbon tax system is often politically unpopular, but it has notable benefits that highlight its potential as a climate change mitigation tool. Specifically, the carbon tax penalizes fossil fuel consumption in the immediate term, while creating a disincentive that can propel social actors towards renewable forms of energy. The implication is that governments should invest in climate change mitigation approaches that are consistent with international goals and that can reduce fossil fuel dependency in the immediate term while creating an incentive for the widespread adoption of renewable energy.

Conclusion
In conclusion, the purpose of this analysis was to consider the range of climate change mitigation tools that are typically used by governments across the globe. The intent was to evaluate the advantages and disadvantages of each approach, prior to offering a recommendation for the best approach that governments can use. It was argued that governments should rely on measures that are consistent with international goals and that can reduce fossil fuel dependency in the immediate term while creating an incentive for the widespread adoption of renewable energy. This in turn highlights the carbon tax and the SDGs as the most appropriate and comprehensive approaches to climate change mitigation.

Reference List:

Dale, A., Robinson, J., King, L., Burch, S., Newell, R., Shaw, A. & Jost, F. 2020, “Meeting the climate change challenge: local government climate action in British Columbia, Canada”, Climate Policy, vol. 20, no. 7, pp. 866-880.

Ionescu, L. 2019, “Climate Policies, Carbon Pricing, and Pollution Tax: Do Carbon Taxes Really Lead to a Reduction in Emissions?”, Geopolitics, History and International Relations, vol. 11, no. 1, pp. 92-97.

Kutasi, G. & Perger, J. 2015, “Tax incentives applied against externalities: International examples of fat tax and carbon tax”, Society and Economy, vol. 37, pp. 117-135.

Lai, O. 2021, “What Countries Have A Carbon Tax?”. Retrieved May 10 2022 from https://earth.org/what-countries-have-a-carbon-tax/

Liu, H. & Li, Z. 2017, “Carbon Cap-and-Trade in China: A Comprehensive Framework”, Emerging Markets, Finance & Trade, vol. 53, no. 5, pp. 1152-1169.

Michaelowa, A. 2004, “Firms, Governments and Climate Policy: Incentive-Based Policies for Long-Term Climate Change”, International Affairs, vol. 80, no. 2, pp. 382-384.

Osman, M., Schwartz, P. & Wodak, S. 2021, “Sustainable consumption: What works best, carbon taxes, subsidies and/or nudges?”, Basic and Applied Social Psychology, vol. 43, no. 3, pp. 169-194.

Pandey, S. 2017, “The Road From Millennium Development Goals to Sustainable Development Goals by 2030: Social Work’s Role in Empowering Women and Girls”, Affilia, vol. 32, no. 2, pp. 125-132.

Rabe, B.G. 2016, “The Durability of Carbon Cap-and-Trade Policy”, Governance, vol. 29, no. 1, pp. 103-119.

Sachs, J.D. 2012, “From millennium development goals to sustainable development goals”, The Lancet, vol. 379, no. 9832, pp. 2206-2211.

Sforna, G. 2019, "Climate change and developing countries: From background actors to protagonists of climate negotiations", International Environmental Agreements: Politics, Law and Economics, vol. 19, no. 3, pp. 273-295.

United Nations Environment Programme. n.d. "Facts About the Climate Emergency". Retrieved May 10 2022 from https://www.unep.org/explore-topics/climate-action/facts-about-climate-emergency

Withey, P., Sharma, C., Lantz, V., McMonagle, G. & Ochuodho, T.O. 2022, “Economy-wide and CO2 impacts of carbon taxes and output-based pricing in New Brunswick, Canada”, Applied Economics, vol. 54, no. 26, pp. 2998-3015.

Zhang, G., Zhao, L., Zhang, Q. & Zhang, Z. 2021, “Effects of Socially Responsible Behaviors in a Supply Chain under Carbon Cap-and-Trade Regulation”, Discrete Dynamics in Nature and Society, vol. 2021, pp.1-18.

Accounting for the First World War

Academic Discipline: History
Course Name: History
Assignment Subject: Accounting for the causes of WW1
Academic Level: High School-Grade 12
Referencing Style: Chicago
Word Count: 2,023

The First World War was historically unprecedented in terms of the advanced warfare tools, casualty rates, and involvement of multiple nations from different continents. Although the onset of the war is largely attributed to the 1914 Sarajevo assassination, the causes of the war pre-date the events of 1914. The purpose of this paper is to discuss the causes of the First World War, to ascertain the relative importance of cumulative factors that led to the war. It will be argued that while several factors contributed to the outbreak of war, the single most important factor was the alliance system.

The first major cause of the First World War was the alliance system. On the one hand, Germany, Austria-Hungary, and Italy formed the Triple Alliance, while on the other hand, the Allies, or Triple Entente, comprised France, Russia, and Britain. The alliance system contributed to the onset of war for three main reasons. First, the system was based on distinct alliances or partnerships that dictated the unfolding of war, as an attack on one member of a faction was necessarily perceived as an attack on all members of that faction (Alpha History 2021). Second, the alliance network divided the great powers into two distinct camps, thereby essentially pitting the two groups against each other in the battle to control Europe and the rest of the world. Third, the alliance system was based on the mandate to protect mutual interests, which were in turn perceived as mutually exclusive with the strategic interests of the other camp (Alpha History 2021).

In addition to the alliance system, Europe was engulfed in an arms race in the years leading up to 1914. The arms race was characterized by the heavy militarization of both camps, each intent on creating naval, air force, and standing army capacities that would rival and dominate those of the other (Maurer 1997, p. 286). The arms race was facilitated by the industrial revolution and subsequent technological advancements, which paved the way for the manufacture of semi-automatic rifles, tanks, and other warfare tools. The arms race contributed to the onset of the First World War in three distinct ways. First, it resulted in the creation of war equipment with the inherent capacity to devastate entire populations and raze cities to the ground, which increased the likelihood of total war among the warring parties. Second, the arms race was fuelled by a spirit of competition, thereby intensifying the rivalry between the Triple Alliance and the Triple Entente (Maurer 1997, p. 286). Third, the arms race provided each faction with a legitimate prospect of victory over the other, thereby exacerbating tension, as neither party was likely to back down from a military confrontation.

The third contributing factor to the First World War was the influence of nationalism and the empire-building mentality among European statesmen. Since the unification of both Italy and Germany in the late 19th century, both countries had expressed an intent to maintain their territorial empires within Europe, especially by staving off potential attacks from already established powers such as Britain and France (Williamson 1988, p. 795). Nationalism and the empire-building mentality contributed to the onset of war in four major ways. First, they created a hegemonic power battle between Britain and France on the one hand and Germany and Italy on the other, as the latter powers sought to increase their diplomatic and economic importance in Europe and beyond. Second, nationalism fuelled the sentiment that each country was virtually on its own, and that the ability to protect its strategic interests was paramount to maintaining relevance in the restructured European order. Third, the empire-building mentality informed the sentiment that each country had to expand its geographic boundaries to increase its prospects of successfully staving off an attack from real and perceived enemies. Finally, nationalism and empire building both contributed to the balance-of-power crisis that Europe experienced at the time, as a range of competing countries with comparable economic and military strength vied for the opportunity to dominate Europe.

The fourth factor contributing to the eruption of war was imperialism. All major countries in both the Triple Entente and the Triple Alliance had overseas empires that were useful in fuelling the industrialization drive in their respective domestic economies, in addition to providing critical resources that could be used for armament among other initiatives. Imperialism’s first contribution to the war was intensifying the rivalry and hostility between the European powers, for instance illustrated by the fight for territories in the Scramble for Africa (Weinstein 2000 p.11). Second, imperialism fed the notions of nationalism and empire building, thereby providing an additional incentive to maintain the tension and hostility between European powers. Third, imperialism enabled the European great powers to amass resources from their colonies, which were used for industrialization, and the manufacture of advanced weapons. Therefore, in the absence of colonies and imperialism, industrialization may have occurred much later, which would have also ostensibly delayed the onset of war.

An arguably more proximate cause of the war was the Balkan crisis. Prior to 1914, Austria-Hungary had tensions with Serbia over the latter's desire to unify all Slavic people, including some populations residing in Austria-Hungary (Alpha History 2021). Austria-Hungary perceived the Serbian intent as a direct assault on its nationalism and empire-building imperatives, thereby increasing the tension between the two in the region. With the assassination of Archduke Franz Ferdinand by a Serbian extremist, Austria-Hungary capitalized on the opportunity to redress its grievances with Serbia. The Balkan crisis was therefore not only an ongoing tension between Austria-Hungary and the Balkan states, but also the final impetus for war.

This paper has thus far argued that the First World War occurred because of cumulative factors, specifically the alliance system; the arms race; nationalism and empire-building sentiments; imperialism; and the Balkan crisis. The next important question pertains to which of the identified factors can be deemed most influential in leading to the First World War. This section will argue that the alliance system was the single most important contributor, primarily because none of the other identified factors was sufficient to single-handedly cause the eruption of war. The relative importance of each factor will be considered separately, prior to defending the identified thesis.

As noted earlier, the arms race was critical in setting the stage for war, as it created the impression that each country had the capability not only to stave off an attack from an enemy, but also to successfully launch an attack for its own strategic purposes. However, the arms race was insufficient as a single cause of the war. This is because both historically and in the contemporary context, countries have always had different levels of armament, and the imperative to create a military force that can repel an external attack has existed since time immemorial. The implication is that the arms race was necessary insofar as it provided the tools for war, but it remained insufficient to cause the war between the European powers.

Similarly, it was established that nationalism and empire-building sentiments were critical in driving the tension that eventually erupted into war. However, like the arms race, nationalism and empire building were not stand-alone causes of the war. First, nationalism is arguably an important aspect of any country, both historically and in the contemporary context, as it provides the basis for national unity, a sense of belonging, and national pride. Therefore, nationalism itself did not cause the war. Instead, the leading statesmen in both the Triple Alliance and the Triple Entente manipulated nationalistic sentiment to create a zero-sum game in Europe. That is, nationalism was used as a vehicle to fuel the othering of external societies, to the extent that nationalism became synonymous with defending one's country against enemies. By the same token, empire building may not have necessarily led to war, primarily because each great power had a subtle respect for, and acceptance of, the empire-building ambitions of the others. For instance, Germany's unification process led to its acquisition of Alsace-Lorraine from France. Although France resented the cession of its territory after the Franco-Prussian War, this was insufficient to make France declare war on Germany to regain it. The implication is that empire building did exacerbate tension among the great powers, but it did not single-handedly lead to the eruption of war.

Imperialism was equally important insofar as it exacerbated tension and hostility among the European powers, especially where they were vying for control over the same colonial territory. The first reason imperialism did not single-handedly lead to war is that it had existed for decades prior to the First World War, intimating that, at the very least, it was not a proximate cause. Second, imperialism was an all-European affair, in that each country was, in practice, able and permitted to annex foreign territories to fuel industrialization or other agendas in the domestic context. For instance, although Britain and Germany were effectively enemies by 1914, both had territories in Africa which were acknowledged and respected as such despite the rivalry between them. This demonstrates that imperialism was not a zero-sum game, and the ability of each European power to exercise it implies that it was not a proximate cause of the war.

Of all the factors identified as contributing to the onset of war, the one that could have single-handedly led to its eruption was the alliance system. First, the alliance system created a reality in which an attack on one was perceived as an attack on all, implying that any skirmish had the potential to bring Europe to war. Second, the Balkan crisis and the Sarajevo assassination both revealed the significance of the alliance system in leading Europe to war. Austria-Hungary secured the support of Germany, its Triple Alliance partner, while Serbia received the protection of Russia, prior to any official declarations of war (Alpha History 2021). In other words, the Sarajevo assassination assumed importance because the conflicting parties were assured of the intervention of their allies should they engage in warfare, thereby underscoring the significance of the alliance system in leading to war. Finally, the alliance system was important because the war involved most of Europe, and eventually the United States and Japan, demonstrating that in the absence of the two camps, war might still have occurred, but on a far smaller scale. The magnitude of the war and the involvement of multiple parties demonstrated that the alliances created formal allegiances that had to be honored, in turn drawing in multiple countries and setting the stage for a historically unprecedented conflict.

In conclusion, this paper sought to identify the causes of the First World War and isolate at least one cause that may have unilaterally resulted in the war. It was argued that the main factors contributing to the onset of war included the arms race; the political significance of nationalism and empire-building sentiments; imperialism; the alliance system; and the Balkan crisis. Moreover, the paper argued that of all the identified factors, the single most important determinant of the war was the alliance system. It is important to stress that each of the identified factors was significant in leading to the First World War, and that the complexity of the war is such that it cannot be attributed to any single factor. This paper demonstrated that while all identified factors contributed to the eruption of war, the alliance system was the most significant culprit. In the absence of the alliance system, war may still have broken out between some great powers, but its magnitude and effects would have been far smaller. The alliance system forged networks of allegiance that had to be honored, in turn explaining why the First World War involved several countries, was fought in many countries, and led to casualty rates of more than 20 million.

Bibliography
Alpha History. 2021. “Alliances as A Cause of WW1”. Accessed May 10 2022 from https://alphahistory.com/worldwar1/alliances/

Maurer, John H. 1997. “Arms Control and the Anglo-German Naval Race before World War I: Lessons for Today?” Political Science Quarterly 112 (2): 285-306.

Weinstein, Jeremy M. 2000. “Africa’s “Scramble for Africa”: Lessons of a Continental War.” World Policy Journal 17 (2): 11-20.

Williamson, S. R. 1988. “The Origins of World War I.” The Journal of Interdisciplinary History 18 (4): 795–818.

Stress Management Strategies for Nurses: Expository Essay

Academic Discipline: Nursing
Assignment Subject: Stress management strategies for nurses
Academic Level: Undergraduate-2nd Year
Referencing Style: MLA
Word Count: 2,055

The COVID-19 pandemic demonstrated the physical strain and peril that nurses expose themselves to in providing high quality care for patients. However, what has been more subtle is the psychological toll that nurses experience through their profession, which can create health related challenges for the nurses, dilute the quality of care provided to patients, undermine best practices in the nursing profession, and lead to burnout among other undesirable realities (Lopez-Lopez et al., 1032). Stress management is therefore a critical imperative for the nursing profession, as it can equip nurses with the necessary tools to manage the psychological burden associated with their profession. The purpose of this paper is to explore some of the stress coping strategies that nurses can pursue, thereby providing some practical utility to the nursing profession, in addition to individuals who experience stressful working, living and learning environments. Moreover, the paper will evaluate how healthcare institutions can facilitate productive stress management techniques among nurses.

The first strategy that nurses can use to cope with stress is mindfulness. This approach can assume the form of a reflective diary or meditation. According to research, mindfulness is useful in coping with stress as it enables individuals to be fully aware of their environment and themselves, such that they desist from overreacting to stimuli or feeling overwhelmed (Lee et al., 87). Nurses can adopt mindfulness not only to become fully immersed in their roles and duties, but also to maintain level-headedness when confronted with stress triggers. A reflective diary in particular can improve nurses' self-awareness, such that they become more attuned to their own stress triggers, their current coping strategies, the impact of those strategies on their ability to perform expected duties, and the possibility of adopting alternative coping strategies that would enhance their wellbeing (Lee et al., 87).

Healthcare institutions can facilitate mindfulness training as an institutional practice, or by offering individual nurses mindfulness resources. With regards to the former, healthcare institutions can offer seminars, talks, reading materials, and/or learning modules for their nursing staff. This approach would be beneficial in that it standardizes mindfulness training as a key component of safeguarding the mental and physical safety of nurses (Mahon et al., 572). In relation to the latter, healthcare institutions can offer mindfulness training packages upon request, based on the rationale that some nurses may prefer alternative stress coping strategies and should be permitted to choose the approach that resonates best with their personality, background, and other personal factors.

The second strategy that nurses can use to cope with stressful work environments is making positive lifestyle choices. Specifically, nurses should be encouraged to maintain a healthy diet, exercise regularly, and ensure they have adequate time to rest and sleep (Pelit-Aksu et al., 1095). Research has noted that fatigue is one of the most common causes of poor performance in the healthcare sector, which underscores the importance of a healthy lifestyle within the nursing profession (Barker & Nussbaum, 1370). The significance of a healthy lifestyle is that it alleviates fatigue-related determinants of stress, while simultaneously ensuring that nurses maintain positive health outcomes that improve their ability to administer quality care to patients. The main potential drawback of this approach is that nurses often have busy and/or unpredictable schedules, which can undermine their ability to establish an exercise routine, among other habits (Barker & Nussbaum, 1370).

Healthcare institutions can facilitate the adoption of healthy lifestyles among nurses in several ways. First, institutions can offer important information on dietary guidelines, which nurses can thereafter tailor to personal tastes and preferences. Second, healthcare institutions can invest in building gyms and other recreational facilities on site, which would enable nurses to exercise according to their schedules, and with maximum convenience. Third, institutions can provide nurses with access to dieticians and personal exercise trainers, who can collaborate with nurses to create a schedule that fits the goals and availability of each individual nurse. In each case, healthcare institutions can play a useful role in creating supportive environments that improve nurses' ability to maintain healthy lifestyle patterns as a strategy for coping with stressful working environments.

The third strategy that nurses can use to manage stress is freely communicating their challenges to the matron, chief nursing officer, or chief nursing executive. The rationale for this approach is that nurses often feel overwhelmed by practices or expectations that are within the control and purview of the matron, implying that the latter can offer assistance or advice that can be useful to the nurse. For instance, during the pandemic, a common challenge experienced by nurses was the burden of working overtime, necessitated by the high demand for healthcare services within the general population (Aggar et al., 91). Qualitative research indicated that nurses who communicated their experiences and feelings to the matrons had better stress coping strategies, as the matrons were often able either to remove the stressor entirely (for example by hiring more temporary nurses) or to offer advice that could improve the stress coping mechanisms used by the nurses (Aggar et al., 91; Zhang et al., 1584).

The establishment and maintenance of open communication channels between the nurses and matrons is contingent on the policies adopted by the healthcare institution in question. First, it is imperative to appoint matrons who are interested in the welfare and wellbeing of nurses, implying that there should be a balance between their supervisory responsibilities and administrative duties. Second, it is important to ensure that nurses are actively encouraged to seek support, guidance and assistance from the matron, for instance by creating roundtable sessions where nurses openly communicate with the matron on a range of issues affecting their professional or personal wellbeing. Third, it is important that the upper levels of the nursing hierarchy adopt a leadership style that is conducive to open communication channels with the nursing staff. According to the literature, the transformational leadership style would be useful, as it is contingent on empowering followers to take initiative and assume an active role in resolving challenges faced within the institution (Echevarria et al., 168). Similarly, the servant-oriented leadership style would be equally useful, as it places the needs of the followers (nurses in this instance) as a priority for the leadership (Kül and Sönmez, 1168). Moreover, qualitative research has provided support for the assertion that the leadership style adopted by the upper hierarchy in healthcare institutions is a critical determinant of the organizational culture, which then has either a direct or indirect influence on the health and wellbeing of all organizational members (Echevarria et al., 168).

The fourth strategy that nurses can adopt to cope with stress is seeking peer support. According to Li et al. (204), the challenges faced by nurses over the course of performing their duties are often shared by fellow nurses. This implies that seeking peer support would be useful, as a nurse can benefit from feeling understood and from receiving actionable advice that can be used to alleviate stress (Li et al., 204). Peer support is also useful because it demonstrates that the nurse is not experiencing unique or novel stressors in the workplace environment, thereby potentially reducing feelings of helplessness and being overwhelmed. Peer support is equally important because it fosters a sense of community within the nursing staff, thereby improving the likelihood of positive communication and interaction, which can also alleviate stress.

Healthcare facilities can actively promote peer support for nurses in a variety of ways. First, institutions are well placed to pair newly recruited nurses with experienced nurses, thereby creating an automatic peer support pairing system that can be useful for newly recruited nurses. Second, institutions can provide seminars or talks whereby the more experienced nurses discuss work-related challenges and issues with junior nurses, thereby also providing a form of peer support. Third, healthcare facilities can actively encourage the creation of peer support groups within the hospital, where nurses are free to join a peer support group that suits their personal expectations or preferences. Finally, healthcare institutions can seek input from the nurses on what kind of peer support system should be established, for instance by administering a survey among all nurses working at the institution.

The fifth strategy that nurses can use to cope with stressful environments is relying on counselling or therapy services. Research has noted that, on the one hand, nurses are expected to project a demeanour of confidence, assurance, professionalism and competence in order to carry out their expected duties and responsibilities (Aggar et al., 92). On the other hand, both the personal and professional environments can create feelings of insecurity, fear, and uncertainty, which nurses are typically encouraged to hide or manage privately (Aggar et al., 92). The implication is the possibility of a dissonance between the emotional realities that are publicly disclosed and those that are privately experienced. As such, counselling or therapy can be a useful and effective stress coping strategy, as it would provide the space in which nurses can openly discuss feelings or thoughts that are antithetical to the professional demeanour they are expected to maintain. In counselling, nurses can discuss a range of factors with the assurance that this will not impact how they are treated or perceived at work.

Healthcare institutions can facilitate the counselling stress coping strategy in several ways. First, they can offer counselling services for nurses within the hospital; ideally, these services would be provided by an external third party, which would improve utilization rates among the nurses. Second, healthcare institutions can offer financial support for nurses who prefer an external counsellor, for instance by shouldering some percentage of the cost, or by providing medical benefits and/or insurance coverage. Third, healthcare institutions can provide support by addressing the stigma attached to nurses displaying ‘unprofessional’ emotions or thoughts. That is, it would be beneficial to create an environment where nurses feel supported in experiencing human feelings, emotions and realities, without framing these as harmful to their ability to perform their expected duties and responsibilities. This approach may be useful insofar as it enables nurses to openly seek counselling or therapy to discuss negative emotions that can affect their performance and wellbeing.

The final strategy that nurses can use to cope with stress is maintaining a work-life balance. Over the course of the pandemic, this was arguably difficult to establish, as most nurses had to work overtime and dealt with the ramifications of the pandemic in both professional and personal contexts. However, the maintenance of a work-life balance is important in that it enables nurses to leave work-related stressors in the workplace, such that they can focus on rest and relaxation in their home environments (Lee et al., 93). A work-life balance is equally important because it enables the nurse to establish clear boundaries that can be useful in maintaining a less stressful existence. Healthcare institutions can play a pivotal role in supporting nurses’ establishment of a work-life balance. First, institutions can respect the nurses’ home environment by refraining from calling them in to work or discussing work-related issues while they are at home. Second, institutions can offer mandated leave, whereby each nurse is expected to take at least three weeks away from work every six months. Third, institutions can facilitate work-life balance by encouraging nurses to limit work-related communication (for instance emails, texts and notifications) while they are at home.

In conclusion, the purpose of this essay was to examine some of the stress coping strategies that can be used by nurses. The importance of stress management is pronounced within the nursing profession, as work-related responsibilities and demands can often create stress triggers that affect performance in addition to undermining physical, emotional and psychological wellness. The paper argued that nurses can manage stress through mindfulness; the maintenance of a healthy lifestyle; the establishment of communication channels with the upper hierarchy of the nursing staff; peer support; therapy and counselling; as well as maintaining a work-life balance. The paper also argued that healthcare institutions should play a leading role in providing resources and/or creating an environment that is conducive to the adoption of positive stress coping strategies within the nursing profession.

Works Cited
Aggar, Christina, et al. “The Impact of COVID-19 Pandemic-Related Stress Experienced by Australian Nurses.” International Journal of Mental Health Nursing, vol. 31, no. 1, 2022, pp. 91-103.

Barker, Linsey M., and Maury A. Nussbaum. “Fatigue, Performance and the Work Environment: A Survey of Registered Nurses.” Journal of Advanced Nursing, vol. 67, no. 6, 2011, pp. 1370-1382.

Echevarria, Ilia M., Barbara J. Patterson, and Anne Krouse. “Predictors of Transformational Leadership of Nurse Managers.” Journal of Nursing Management, vol. 25, no. 3, 2017, pp. 167-175.

Kül, Seval, and Betül Sönmez. “The Effect of Nurse Managers’ Servant Leadership on Nurses’ Innovative Behaviors and Job Performances.” Leadership & Organization Development Journal, vol. 42, no. 8, 2021, pp. 1168-1184.

Lee, Jong-Hyun, Jaejin Hwang, and Kyung-Sun Lee. “Job Satisfaction and Job-Related Stress among Nurses: The Moderating Effect of Mindfulness.” Work: Journal of Prevention, Assessment & Rehabilitation, vol. 62, no. 1, 2019, pp. 87-95.

Lee, Meng H., et al. “A Longitudinal Study of Nurses’ Work-Life Balance: A Case of a Regional Teaching Hospital in Taiwan.” Applied Research in Quality of Life, vol. 17, no. 1, 2022, pp. 93-108.

Li, H., et al. “The Effect of a Peer-Mentoring Strategy on Student Nurse Stress Reduction in Clinical Practice.” International Nursing Review, vol. 58, no. 2, 2011, pp. 203-210.

López-López, Isabel M., et al. “Prevalence of Burnout in Mental Health Nurses and Related Factors: A Systematic Review and Meta-Analysis.” International Journal of Mental Health Nursing, vol. 28, no. 5, 2019, pp. 1032-1041.

Mahon, Marie A., et al. “Nurses’ Perceived Stress and Compassion Following a Mindfulness Meditation and Self Compassion Training.” Journal of Research in Nursing, vol. 22, no. 8, 2017, pp. 572-583.

Pelit-Aksu, Sıdıka, et al. “Effect of Progressive Muscle Relaxation Exercise on Clinical Stress and Burnout in Student Nurse Interns.” Perspectives in Psychiatric Care, vol. 57, 2020, pp. 1095-1102.

Zhang, Meng, et al. “Influence of Perceived Stress and Workload on Work Engagement in Front-Line Nurses during the COVID-19 Pandemic.” Journal of Clinical Nursing, vol. 30, 2021, pp. 1584-1595.

Nurses’ Potential in Combating Eating Disorders Among Adolescents

Academic Discipline: Nursing
Assignment Subject: Nurses’ potential in combating eating disorders among adolescents
Academic Level: Undergraduate-2nd Year
Referencing Style: APA
Word Count: 2,003

Social media platforms have become increasingly important in the contemporary context, as they represent a highly efficient avenue for real-time interaction and communication. Social media platforms are predominantly used by the younger demographic cohort, which has important implications for their psychological wellbeing and development (Barry et al., 2017). Notwithstanding the stated benefits of social media, the platforms are also associated with pernicious effects among younger users. The focus of this analysis is on one of the more pernicious of these effects: the promulgation of messages that encourage the development of body image dysmorphia among adolescents. The importance of this topic is that body dysmorphia is arguably the leading cause of eating disorders, which have a disproportionate impact on adolescent females compared to any other demographic group (Jarman et al., 2021).

This essay seeks to discuss the strategies that nursing professionals can use to prevent and/or address the underlying causes of the social-media-induced eating disorders that are prevalent among adolescents. To this end, the next section of the paper presents a brief overview of the research scope, specifically noting the relational dynamics between social media, body dysmorphia, and the development of eating disorders among adolescents. This is followed by a discussion of the strategies that nursing practitioners can use to assist teenagers dealing with eating disorders. The final section concludes with a recapitulation of the main arguments and insights. It will be argued that nursing practitioners can assist teenagers with body dysmorphia by leveraging their professional networks; disseminating information about the consequences of body dysmorphia and eating disorders; empowering teenagers to become more comfortable with their natural body types; enabling teenagers to become cognizant of the detrimental effects of both body image dysmorphia and eating disorders; and collaborating with other professionals in the healthcare sector, such as dieticians and counsellors, to provide a viable action plan for teenagers experiencing eating disorders and body image dysmorphia.

Contextual Overview: Social Media, Body Dysmorphia and Eating Disorders
Social media can be paralleled to most forms of popular culture because of its capacity to expose users to a range of lifestyle choices, patterns and behaviours that can influence perceptions about self, others and the environment (Jarman et al., 2021). Adolescent social media users often follow celebrities, social media influencers, and other pseudo-celebrity users who affect how they perceive themselves. The beauty ideals that are typically parroted and promoted on social media platforms are consistent with the thin body type (specifically for women), which is not unique to social media but has also been noted in other Western popular-culture media (Danthinne et al., 2022; Milkie, 1999). It is therefore commonplace for adolescents to compare themselves with the beauty images, ideals and standards they observe on social media, which can inadvertently lead to the development of body image dysmorphia.

Body image dysmorphia can be defined as feelings of intense dissatisfaction with one’s physical appearance. In the contemporary context, social media has been highlighted as one of the most significant exposure platforms that lead to the development of body image dysmorphia among adolescents (Danthinne et al., 2022). This is because social media is perceived as an accurate projection of the prevalent social norms for beauty, influencing younger users to adhere to the expected beauty standards in order to fit in (Fardouly & Holland, 2018). As noted earlier, the dominant beauty ideal is that of a thin body type, which alienates girls and women who may not naturally fit with that body type.

One of the most detrimental effects of body image dysmorphia is the development of eating disorders (Marks et al., 2020). This is especially applicable to adolescents who may be overweight or obese, such that they are pressured to reduce their body weight drastically in order to fit within the parameters of Western beauty ideals. Consequently, a substantial number of teenage social media users develop conditions such as bulimia nervosa and anorexia nervosa, which have severe physical, emotional, and psychological health implications (Marks et al., 2020). According to recent research, an estimated 2.7% of teenagers in the United States have an eating disorder (including bulimia nervosa and anorexia nervosa) (Polaris Teen Centre, 2018). Moreover, eating disorders can be associated with other comorbidities that can increase the proclivity towards suicidal ideation, among other psychological issues (Marks et al., 2020). The reality that adolescent social media users tend to develop body image dysmorphia, and subsequently eating disorders, highlights the significance of this research topic.

Typical Strategies to Mitigate Against Eating Disorders
According to existing research, eating disorders are typically perceived as psychological problems (Goodyear, 2020; Marks et al., 2020). Consequently, the most cited approach to addressing body image dysmorphia and eating disorders is contingent on psychological intervention. Specifically, teenagers experiencing eating disorders are often encouraged to participate in cognitive behavioural therapy (CBT), which aims to address the problematic thought patterns that justify the development of the eating disorder (Yasar et al., 2019). Other scholars have noted that eating disorders emerge because of exposure to popular culture mediums that promote the Western beauty ideal, intimating that this can be redressed by including other body types and encouraging popular culture to refrain from shaming women who do not subscribe to Western ideals (Milkie, 1999).

Regardless of whether eating disorders are attributed to underlying psychological issues or to disproportionate exposure to Western beauty ideals, the nursing profession is often excluded from the practitioners considered capable of combating eating disorders among adolescents. That is, nurses are not perceived as critical actors who can intervene against the development of eating disorders among adolescents. The rest of this paper seeks to challenge this assumption by demonstrating the ways in which nursing practitioners can intervene against both body image dysmorphia and eating disorders among teenagers. The utility of this discussion is that it improves the conceptualization of eating disorders by highlighting the potential value of including nurses among the potential interventionists.

Nursing Professional Interventions
Nursing practitioners are often the first point of contact with the public, specifically in the context of primary healthcare facilities. Moreover, most schools are now mandated to have some healthcare professionals on their premises, which further explains the importance of nurses as potential interveners against body-dysmorphia-induced eating disorders. In addition, nursing professionals can collaborate with other healthcare professionals in devising a viable intervention for adolescents with eating disorders, which further highlights their potential importance in alleviating some of the negative consequences of social media exposure among adolescents.

The first strategy that nursing professionals can use is to thoroughly understand the development of the eating disorder, specifically from the vantage point of the teenager. That is, one of the best nursing practices in the contemporary context is the ability to practice through a patient-centric approach that perceives the patient as capable of relaying the challenges and issues they are currently facing (Lindgren et al., 2020). In either the primary care or school setting, the nursing practitioner can obtain a detailed patient history that notes when and why the eating disorder emerged, and the reasons the adolescent has struggled to cope with body image dysmorphia. A patient-centric approach would be useful because it empowers the patient to understand their own reality, which will eventually lead to the development of a sense of ownership that can be used to address the problematic behaviour (Lindgren et al., 2020).

Second, nursing practitioners can encourage adolescents to maintain a reflective journal documenting their thoughts, feelings and emotions in the aftermath of spending time on social media platforms. This would be especially useful in a school environment and would enable the adolescents to be cognizant of how social media exposure can lead to the development of body image dysmorphia. The second rationale for this strategy is that it would provide an opportunity to highlight negative thoughts and comparison tendencies- thereby redressing unhealthy psychological processes that also lead to the development of body image dysmorphia in the aftermath of social media exposure (Yasar et al., 2019). Consequently, teenagers would be better placed to find positive coping strategies that do not further jeopardize their health outcomes.

The third approach that nurses can use to combat eating disorders among teenagers is the dissemination of information about eating disorders, focusing on their immediate and long-term health consequences. Although social media can play a role in the development of eating disorders, it does not possess resources that can be effectively used to discourage users from developing eating disorders (Milkie, 1999). As credible healthcare professionals, nurses would have accurate and evidence-based knowledge on the impact of eating disorders on adolescents. They can therefore disseminate information to ensure that teenagers are aware of the effects of eating disorders, thereby discouraging them from responding to body image dysmorphia with extreme measures.

The fourth strategy that nursing practitioners can use is to expose adolescents to ‘successful’ role models who may not necessarily resonate with the Western beauty ideal of a thin body type. The rationale for this approach is that it would demonstrate that body image is not a prerequisite for social success or acceptance, which some adolescents may be unaware or sceptical of (Goodyear, 2020). Nursing professionals would be useful in designing comprehensive programs that can eventually be integrated within healthcare policy to mitigate against eating disorders among adolescents. Nurses may be well placed to actively participate in this approach because they meet a high volume of patients and professionals through their work. Moreover, this recommendation would be consistent with the best nursing practice of advocacy for the patient (Lindgren et al., 2020). That is, the nurse would be advocating for the adolescents by leveraging their professional networks in order to provide adolescents with alternative conceptualizations of beauty, success, social acceptance and desirability.

Another strategy that nurses can use to assist teenagers with eating disorders is to collaborate with other healthcare professionals, for instance dieticians and therapists. Dieticians would be useful insofar as they can generate an appropriate dietary plan that is tailored to the specific goals of the adolescent while maintaining a nutritional balance that reduces the possibility of developing eating disorders. Similarly, therapists would be helpful in providing counselling services that can assist the teenager to understand the roots of the body dysmorphia and eating disorder, which would then facilitate the development of a mitigation strategy. Nursing professionals can play an instrumental role in bringing together different healthcare professions to combat this issue, thereby highlighting the potential utility of nurses in assisting teenagers with body image dysmorphia.

In conclusion, the purpose of this analysis was to evaluate the extent to which the nursing profession can be actively engaged in combating eating disorders among adolescents. The first section of analysis demonstrated that social media use is often problematic, especially among teenage girls, as they tend to develop body dysmorphia because of the exposure to Western beauty ideals that revolve around the thin body type. It was also noted that most strategies on combating body dysmorphia and eating disorders focus on involving psychological healthcare practitioners- often at the expense of nurses. The paper argued that contrary to conventional wisdom, nurses can play an important role in mitigating against the development of eating disorders among teenagers. The necessary qualification that contextualizes the arguments herein is the active engagement of nurses in learning environments, as well as their prioritization as the first point of contact in primary healthcare facilities. The paper thus argued that nurses can adopt strategies that are consistent with best practices- for instance adopting a patient centric approach to care delivery, empowering the patients across all stages from diagnosis to treatment, and pioneering collaboration within the healthcare profession to assist teenagers with eating disorders. Moreover, it was argued that nurses can be useful in disseminating evidence-based and practical information; challenging the conventional Western beauty ideal of the thin body type; encouraging teenagers to actively think about the subliminal messages conveyed through social media platforms; and creating positive dietary and exercise regimes that are compatible with the needs, expectations or desires of teenagers experiencing body dysmorphia and subsequently developing eating disorders.

References
Barry, C. T., Sidoti, C. L., Briggs, S. M., Reiter, S. R., & Lindsey, R. A. (2017). Adolescent social media use and mental health from adolescent and parent perspectives. Journal of Adolescence, 61, 1-11.

Danthinne, E. S., Giorgianni, F. E., Ando, K., & Rodgers, R. F. (2022). Real beauty: Effects of a body‐positive video on body image and capacity to mitigate exposure to social media images. British Journal of Health Psychology, 27(2), 320-337.

Fardouly, J., & Holland, E. (2018). Social media is not real life: The effect of attaching disclaimer-type labels to idealized social media images on women’s body image and mood. New Media & Society, 20(11), 4311-4328.

Goodyear, V. (2020). Narrative matters: Young people, social media and body image. Child and Adolescent Mental Health, 25(1), 48-50.

Jarman, H. K., Marques, M. D., McLean, S. A., Slater, A., & Paxton, S. J. (2021). Motivations for social media use: Associations with social media engagement and body satisfaction and well-being among adolescents. Journal of Youth and Adolescence, 50(12), 2279-2293.

Lindgren, B., Molin, J., & Graneheim, U. H. (2020). Balancing between a person-centred and a common staff approach: Nursing staff’s experiences of good nursing practice for patients who self-harm. Issues in Mental Health Nursing, 42(6), 564-572.

Marks, R. J., De Foe, A., & Collett, J. (2020). The pursuit of wellness: Social media, body image and eating disorders. Children and Youth Services Review, 119, 1-8.

Milkie, M. A. (1999). Social comparisons, reflected appraisals, and mass media: The impact of pervasive beauty images on black and white girls’ self-concepts. Social Psychology Quarterly, 62(2), 190-210.

Polaris Teen Centre. (2018). 10 Statistics of Teenage Eating Disorders. Retrieved from https://polaristeen.com/articles/10-statistics-of-teenage-eating-disorders/

Yaşar, A. B., Abamor, A. E., Usta, F. D., Taycan, S. E., & Kaya, B. (2019). Two cases with avoidant/restrictive food intake disorder (ARFID): Effectiveness of EMDR and CBT combination on eating disorders (ED). Klinik Psikiyatri Dergisi: The Journal of Clinical Psychiatry, 22(4), 493-500.

Information boom or information crisis: The impact of digitization on news credibility

Academic Discipline: Media and Communications
Course Name: Media and Communications
Assignment Subject: Information boom or information crisis: the impact of digitization on news credibility
Academic Level: Undergraduate-2nd Year
Referencing Style: MLA
Word Count: 1,898

With the rise of Web 2.0 – meaning an interactive Web where content is user-generated (Buhler et al. 215) – there has been a modification in the production and distribution of information, and thus in the overabundance (Anderson 52) and diversity (Buschow and Suhr 385) of information available. The positive outcome of this is that society now has access to more information sources, in comparison to the previously highly hierarchical information (Schapals, Maares and Hanusch 22) produced by a small and exclusive group of actors in the press industry (Hermida and Young 96). At the same time, the subsequent shift towards an abundance of information accessible to a much larger pool of contributors (Buhler et al. 218) has produced some negative outcomes, including information that is opinionated, contestable, hard to verify (Fisher 21), and in some cases purposely manipulative (Heinrich 179). In this paper, it will be argued that the digitization of news has ushered in a remarkable means of mobilization, action, and information dissemination that counterbalances the traditional mass media, but that it has simultaneously generated a crisis of credibility, as the resulting information is increasingly difficult to verify and easy to manipulate.

The digitization of news
In order to understand the reality of the processes of production and circulation of information, the material conditions that organize them need to be considered. This refers to the major distribution channels that direct the circulation of news (Parcu 103) and the main players taking part in the international flow of information (Anderson 53). Taking into account the socio-economic conditions weighing on the production and dissemination of information (Hermida and Young 94) makes it possible to qualify the discourses celebrating the advent of an era of diversity (Buschow and Suhr 385) through news digitization.

From its inception, the Web became a medium that did not have to submit to the control over the production and circulation of information to which the major media outlets were subjected (Schapals, Maares and Hanusch 23). This was, initially, a means of ensuring a greater range of information inputs (Heinrich 180). The Internet was seen as having the capacity to circulate information across borders with unprecedented speed and reach (Hermida and Young 93), while also being largely cost-effective. These new contributors of information would play a part in extending the field of news, including eyewitness accounts and investigative journalism (Monahan and Ettinger 490) not commissioned or funded by large media corporations. As a result, independent media centers sprang up to produce both factual and valuable information (Schapals, Maares and Hanusch 24), but also conspiracy theories (Fisher 36) and counter-information that served to undermine the dominant media corporations (Buschow and Suhr 392). The materiality of the flows of information produced (Buhler et al. 217) attested to their ability to reach mass audiences.

Information boom
The development of Web 2.0 has unsettled the classic information economy (Schapals, Maares and Hanusch 27) by giving rise to a networked economy (Parcu 104), which is free from the limitations of the controlled production and dissemination of news (Heinrich 183). First, as noted above, the reduction in costs has resulted in an unprecedented increase in the number of actors (Buschow and Suhr 386) who have taken on the task of interpreting and disseminating information; second, it has boosted the “diversity of perspectives” (Hyvonen 122) in both qualitative and quantitative terms.

With the development of Web 2.0, ordinary citizens, most with no experience in journalism, branded themselves international correspondents (Hermida and Young 98) if they found themselves in the right place at the right time, mostly aiming to create so-called ‘infotainment’ (Schapals, Maares and Hanusch 21). Their aim was to garner bigger audiences by creating engaging content that was informative, but also largely entertaining, and designed to draw a critical mass. These newly constructed accounts, created on popular social media platforms, namely YouTube and Facebook, and subsequently Instagram, Snapchat, and TikTok (Perez-Escoda et al. 34), were competing both against one another and against the mainstream media outlets (Hyvonen 123) for their share of viewership. On the one hand, they had an advantage over the mainstream media outlets because their viewership and engagement numbers (Buhler et al. 223) were clearly visible, which quickly drew in advertisers (Anderson 59). On the other hand, unless they fully disclosed their sources, their accounts were potentially unverifiable (Fisher 22), and viewers still tended to trust the reports from the mainstream outlets. The content creators – the title subsequently coined for such contributors (Perez-Escoda et al. 28) – did not have the same material and bureaucratic constraints that the mainstream media outlets tend to have (Schapals, Maares and Hanusch 22), and as such felt empowered to freely tamper with the complex dynamics of information exchange (Heinrich 181) in the digital age.

Information crisis
The advent of a new era of news circulation attests to the permanence of imbalances in the processing of information (Parcu 105). With the Internet, blogs, and especially social media platforms, each user works on building an audience. The Web is in a state of continuous flux, with information added, subtracted, edited, going viral or remaining obscure, and producing an abundance of links to other sites and sources (Monahan and Ettinger 486). New intermediaries which have emerged on the Web constitute new gatekeepers of the process of information circulation (Dvorkin 55), as they challenge the effectiveness of the traditional media outlets’ contribution to the coverage of international news. The amateur production of information on the Web (Schapals, Maares and Hanusch 20) has affected the overall circulation of news by boosting competitiveness and changing the marketing and advertising forces (Anderson 59), thus effectively creating the New World Information Order (Zeller, Trakman and Walters 46). In such a system, the economic model of news content is the maximization of viewership.

Media ownership and the influence of political powers over the media (Hermida and Young 98) are the two main reasons that the traditional media have affected news dissemination and contributed to an air of distrust in the production of information (Perez-Escoda et al. 26). It is believed that some mainstream media are a powerful propaganda instrument in the service of political powers, forging significant advantageous positions at the international level (Zeller, Trakman and Walters 60) and influencing international public opinion. In democratic countries, the influence of political and economic forces on the media (Dvorkin 67) may not be as obvious as in regimes perceived as authoritarian, where the interference of public authorities (Schapals, Maares and Hanusch 20) in the work of journalists and the control of information (Dvorkin 16) disseminated by the mass media (particularly those owned by the state) is very apparent.

The control exercised by political forces over the media also has a great influence on the circulation of information in periods of conflict or crisis. For this reason, the emergence of digital social media is seen as an opportunity to redress the balance. Namely, through the use of social media, users hope to undermine the power of public authorities to control the circulation and dissemination of information (Hermida and Young 99). For example, while state media, as a deliberate tactic, do not cover certain topics in times of conflict (Zeller, Trakman and Walters 59), social media and public blogs and forums try to fill these gaps in news coverage (Monahan and Ettinger 482). They tend to be based on eyewitness accounts, but also on unbridled and unreliable sources of information and, frequently, conspiracy theories (Fisher 36).

Numerous non-governmental organizations are working to ensure freedom of information (Heinrich 180), highlighting the need for citizens and communities, within public or private associations, to continue producing, processing, and communicating information. In the age of the Internet, driven by the hope of widening the circle of information producers (Buschow and Suhr 384), this works to transform a passive relationship to information into interactive production. Alternative news sources have thus experienced a major transformation with the emergence of the Internet and the digitization of news on online social media platforms (Parcu 103).

As a result of the digitization of news, online social media are also seen by some as repositories of disinformation, contributing to the information crisis in that they have the capacity to be disturbing propaganda tools (Heinrich 177). For example, in addition to the proliferation of content that has been doctored or taken out of context, there is the issue of identifying and preventing the dissemination of engaging posts which take off quickly and go viral, as users of other social media platforms share them without checking their authenticity (Dvorkin 56).

Future prospects
The argument over whether the impact of digitization on news has resulted in an information boom or an information crisis is centred on the way people access and consume news. For example, socio-digital networks – which now play a major role in the way people access and source information (Buschow and Suhr 390) – seem to have the technical capacity to draw significant viewership. At the very least, they provide access to more diverse visions and opinions (Parcu 107); yet with the growing awareness of digital literacy (Buhler et al. 214), online users are seeking verifiable sources of information in order to gain an understanding of a subject that is rooted in facts. The proliferation of unverifiable sources and ‘infotainment’ has been criticized for its immense influence on the way people perceive information (Perez-Escoda et al. 31) and the way it shapes their view of the world (Zeller, Trakman and Walters 51), even if they are not aware of it.

On the one hand, ‘infotainment’ and content created with the aim of reaching a critical mass on social networks appear to have encouraged people to take a greater interest in news and in social and political issues (Hyvonen 126) in a way that is more engaging than traditional news reports in the press (Fisher 31). On the other hand, media and communication analysts have questioned the capacity of content creators to balance traditional media reports (Dvorkin 82), highlighting the way in which certain players have gradually established themselves as essential mediators (Parcu 92) between content produced by amateurs and the mainstream media.

Concluding remarks
The control of information, or rather the authentication and verifiability of accounts (Buhler et al. 214), whether by independent users, professionals, or public figures, remains a major issue, even more so in the digital era. The traditional mainstream media, which for a long time held a monopoly on information (Dvorkin 80), play a major role during these times, and their control becomes an important issue for the political and economic powers (Zeller, Trakman and Walters 60). The rise of social media and the digitization of news have contributed to reducing the power of the mass media over the control and circulation of information. By facilitating the expression of citizens, social media have contributed to the public’s awareness of social and political issues and their interest in understanding the causes, effects, and consequences. Nevertheless, these new media platforms are also sources of misinformation and propaganda (Heinrich 177), which spread more and more as competition mounts between the major corporate media and online users.

Referencing Style: MLA
Word Count: 1898
Sources: 13

Works Cited
Anderson, C. W. “The State(s) of Things. 20 Years of Journalism Studies and Political Communication.” Political Communication 21.1 (2020): 47-62.

Buhler, Julian, et al. “Developing a Model to Measure Fake News Detection Literacy of Social Media Users.” Disinformation, Misinformation, and Fake News in Social Media. Springer (2020): 213-227.

Buschow, Christopher, and Maike Suhr. “Change management and new organizational forms of content creation.” Media and change management. Springer (2022): 381-397.

Dvorkin, Jeffrey. Trusting the News in a Digital Age: Toward a “New” News Literacy. John Wiley & Sons, 2021.

Fisher, Caroline. “What is meant by ‘trust’ in news media?” Trust in media and journalism, edited by Kim Otto and Andreas Kohler. Springer (2018): 19-38.

Heinrich, Ansgard. “How to build resilient news infrastructures? Reflections on information provision in times of ‘Fake News’.” Resilience and Hybrid Threats. IOS Press (2019): 174-187.

Hermida, Alfred, and Mary Lynn Young. “From peripheral to integral? A digital-born journalism not for profit in a time of crises.” Media and Communication 7.4 (2019): 92-102.

Hyvonen, Mats. “As a matter of fact: journalism and scholarship in the post-truth era.” Post-Truth, Fake News. Springer (2018): 121-132.

Monahan, Brian, and Matthew Ettinger. “News media and disasters: Navigating old challenges and new opportunities in the digital age.” Handbook of disaster research. Springer (2018): 479-495.

Parcu, Pierluigi. “New digital threats to media pluralism in the information age.” Competition and Regulation in Network Industries 21.2 (2020): 91-109.

Perez-Escoda, Ana, et al. “Fake news reaching young people on social networks: Distrust challenging media literacy.” Publications 9.2 (2021): 24-40.

Schapals, Aljosha Karim, Phoebe Maares, and Folker Hanusch. “Working on the margins: Comparative perspectives on the roles and motivations of peripheral actors in journalism.” Media and Communication 7.4 (2019): 19-30.

Zeller, Bruno, Leon Trakman, and Robert Walters. “The Internet of Things-The Internet of Things or of Human Objects-Mechanizing the New Social Order.” Rutgers L. Rec. 47 (2019): 15-118.

The effectiveness of the COVID-19 temporary solutions in waste management

Academic Discipline: Urban Geography
Course Name: Urban Geography
Assignment Subject: The effectiveness of COVID-19 temporary solutions in waste management, urban mobility and pollution
Academic Level: Undergraduate-3rd Year
Referencing Style: Chicago
Word Count: 1,885

Medical waste tends to harbour a host of microorganisms that can infect hospital patients, healthcare workers, and the general public. In a public health crisis, infectious risks include the release into the environment of contaminant microorganisms (Meng et al. 2021, 226) as patients are treated in hospital for COVID-19. Additionally, the failure to segregate and recycle waste is one of the causes of environmental degradation, in terms of both pollution and the increase in carbon dioxide (CO2) emissions from decomposing waste.

As a result of the urgency of the pandemic, most countries were unprepared to tackle the mixed contaminated medical waste from COVID-19. This was a warning sign that the basic infrastructure and capacity to deal with medical waste effectively and safely should be widely communicated (Battisti 2021, 2) in accordance with the requirements of the relevant multilateral environmental agreements (Liang et al. 2021, 9). Interim solutions, such as segregating waste and using locally manufactured incinerators, were implemented to meet short-term needs for COVID-19 waste treatment. The aim was, firstly, to prevent the transmission of the virus (Kumar et al. 2021, 449) and, secondly, to ensure that surplus waste did not accumulate. This paper discusses how the immediacy of the public health crisis required a radical paradigm shift towards solutions which serve to protect the environment (Garnett et al. 2022, 6) from the waste newly generated since March 2020, in a world that was already struggling (Ghernaout and Elboughdiri 2021, 5) with waste management and waste pollution.

Waste management issue
The health risks of direct contact with COVID-19-contaminated waste include potential negative impacts on human health if it is not disposed of properly, such as the contamination of water and soil during waste treatment (Kumar et al. 2021, 449) and air pollution, given that the virus particles are airborne (Meng et al. 2021, 227). When dumped into the natural environment or at public landfills, medical waste can lead to bacteriological or toxic contamination of the surroundings (Battisti 2021, 3), which can leach into publicly used resources. This concerns all the waste generated by the operation of a healthcare establishment (Garnett et al. 2022, 5), both at the level of its hospitalization and care services and at the level of its medical, technical, and administrative services (Roy et al. 2021, 36). Namely, medical waste includes all waste generated in healthcare establishments (including clinics, medical and dental surgeries, and establishments for the disabled and the elderly), by healthcare workers (on the premises and in homecare), and by research centres and laboratories involved in medical procedures (Ghernaout and Elboughdiri 2021, 3), such as the research and production that went into creating the COVID-19 vaccines (Lazarus et al. 2022, 2). The responsible disposal of unused, expired, or contaminated vaccines also had to be considered, especially as the doses that were purchased by countries as a precaution (Schaefer, Leland and Emanuel 2021, 903) could not subsequently be donated (Lazarus et al. 2022, 1).

As COVID-19 is transmitted by close contact with people carrying the virus, including asymptomatic patients, hospitals, healthcare facilities, and the general public produced more medical waste (Garnett et al. 2022, 2) from PPE, antibacterial products, and surface disinfectants. There was also a sharp increase in the amount of single-use plastics (Meng et al. 2021, 224) due to the fear of cross-contamination. The issue of this newly generated waste has taken on variable dimensions during the pandemic. Firstly, the impact is associated with the quantity generated (Battisti 2021, 2), and secondly, it concerns the importance of mitigating risks (Kumar et al. 2021, 452) for human health and the environment.

Temporary solutions proposed and implemented
Effective solutions for the control of waste throughout the pandemic included maximizing the use of available waste management and, at the same time, seeking to avoid any potential long-term impacts on the environment. To do this, medical and healthcare establishments looked into solutions to manage the increase in waste generation by maximizing the use of existing facilities (Rubab et al. 2022, 218). For example, establishments tested the immediate implementation of systems of safe disposal of used medical equipment, especially the segregation, collection, and management of medical waste (Roy et al. 2021, 35) according to the level of risk of cross-contamination.

In order to assess the environmental risks and potential solutions in this case, the spread of the virus particles outside of healthcare establishments also had to be considered, such as the potential spread of disease and chemical contaminants in high-density urban areas or near abundant marine environments (Battisti 2021, 2). As aforementioned, the deposit of medical care waste in uncontrolled areas can have a direct environmental effect through the contamination of natural resources. Namely, the unauthorized and improper treatment, incineration, or dumping of medical waste tends to pollute resources (Liang et al. 2021, 9) and spread further through the ecosystem. As such, the solutions which were implemented to tackle this temporary but overwhelming surge of medical waste were based on the ‘polluter pays’ principle (Roy et al. 2021, 37), the precautionary principle (de Sousa 2021, 670), and/or the principle of proximity (Herfien et al. 2021, 39).

The ‘polluter pays’ principle requires all waste producers to be legally and financially responsible for disposing of their waste safely and without negatively impacting the environment (Roy et al. 2021, 37). Ensuring that waste disposal does not affect the environment is the responsibility (de Sousa 2021, 672) of each healthcare or medical facility that generates medical waste. Under the precautionary principle, by contrast, the potential risks of the medical waste generated during a health crisis are considered, which obliges medical facilities (Karamanian 2021, 91) that generate considerable amounts of waste to apply high standards (Herfien et al. 2021, 40) for the collection and disposal of waste, to provide thorough training in safety and hygiene to all employees who use PPE and other medical equipment (Rubab et al. 2022, 218), and to communicate the importance of responsible waste management (Kumar et al. 2021, 453) to ensure public safety and environmental protection. In turn, the principle of proximity refers to the treatment and disposal of hazardous biomedical waste on account of its being contaminative (Meng et al. 2021, 228). This means that the management of this waste has to minimize the risks for the population (Herfien et al. 2021, 51) and ensure that the spread of the virus is mitigated. For COVID-19-contaminated waste, this meant disposing of PPE and other medically used items by locking in and storing the waste (Herfien et al. 2021, 51) so that the virus could not permeate, and sending it off to a site which could responsibly disinfect or destroy this type of waste. An extension of this principle is that each country takes steps to properly dispose of all waste within its own borders (Liang et al. 2021, 13) in order to stop the spread of the virus and minimize additional transportation emissions.

Analysis of effectiveness
To meet the urgent need to protect the public in the face of the pandemic, the PPE, additional medical treatment materials, and billions of vaccine doses administered worldwide produced thousands of additional tonnes of waste (Lazarus et al. 2022, 2). Subsequently, following reports issued by environmental agencies stating that single-use masks tend to end up in the oceans as waste (Battisti 2021, 1), the most effective solution was to encourage and clearly communicate guidance on how to reduce waste overall in a safe manner, such as through the use of non-disposable PPE and its proper and frequent disinfection and treatment after use (de Sousa 2021, 672). One solution that was financed and executed in India, Italy, and France, among others, was a wide-reaching initiative to recycle single-use masks by identifying and supporting collection and recycling as a contribution to the circular economy (Roy et al. 2021, 41).

Additionally, one of the temporary solutions which proved to be the most effective, as practiced in Canada, China, and the UK, was mapping the sources of waste generation to identify changes in waste quantities (Garnett et al. 2022, 8). The aim was to maximize the use of resources and the efficiency of waste collection and treatment. This included re-directing services (Herfien et al. 2021, 51) from locations where waste generation had been reduced as a result of stay-at-home and physical distancing directives – such as schools, offices, shopping malls, and other public places – to those shown to generate more waste (de Sousa 2021, 674) as a result of COVID-19: clinics, hospitals, other medical establishments, home care centres, research and testing laboratories, quarantine centres, etc.

Evidently, the solutions which proved effective involved allocating human and financial resources, as well as assets intended for waste collection, based on the need to address waste generation points (Garnett et al. 2022, 9). This meant increased waste collection services, but also responsible disposal and treatment (Karamanian 2021, 91) to ensure that waste did not end up in landfills and oceans, including the temporary on-site storage and/or thermal treatment (Herfien et al. 2021, 50) of potentially contaminant waste and adequate and safe sanitation measures (Meng et al. 2021, 227). A solution practiced in South Korea and the United States concerned items intended for multi-material recovery (Garnett et al. 2022, 5), which were stored temporarily on site for 72 hours, the known survival time of the virus particles (Liang et al. 2021, 13). After that, the collected materials were deemed safer to handle, treat, and recycle.

Potentials for long-term solutions
From the start of the pandemic at the beginning of 2020, numerous organizations mobilized to provide the masks and other personal protective equipment (PPE) proven effective in protecting the public and mitigating the spread of the virus (Karamanian 2021, 90). In the wake of this, countries looked for ways to finance businesses that would produce PPE nationally (Ghernaout and Elboughdiri 2021, 4), as opposed to waiting for overseas shipments, which were delayed due to COVID-19 restrictions and also struggled to keep up with a sudden and overwhelming demand.

Considering long-term prospects, the proposed solutions have to make it possible to set up effective local production and collection circuits (Roy et al. 2021, 41) for used single-use personal protective masks and equipment (manufacturing, distribution, collection, and safe transportation) from medical and healthcare establishments, the general public, businesses, and so on. This would involve collection points or drop-off terminals complying with health standards (Schaefer, Leland and Emanuel 2021, 904), and the sorting and treatment of waste for reuse or recycling (Meng et al. 2021, 229). Such solutions must provide equipment that can be integrated into processes of material reuse (Garnett et al. 2022, 10) in order to prevent accumulation in landfills and oceans. They should also ideally include disinfecting technologies for the safe reuse of PPE as part of a wider manufacturing process.

Conclusion
As a result of stay-at-home orders and physical distancing measures, the disruption of basic urban services, including the collection, segregation, treatment, and disposal of waste, essential for hygiene and public health, posed new challenges (Kumar et al. 2021, 453) and required new and effective solutions. To properly dispose of waste collected from households and from medical establishments dealing with COVID-19, municipalities had to boost their waste management services. Temporary changes to waste management operations had to be executed using existing resources and quick-impact solutions to maintain continuity and efficiency of operations (de Sousa 2021, 676). This put a strain on waste management operations and the beneficiaries of these services. Best practices and recommended guidelines on the treatment of waste included adapting municipal waste management to remedy this situation and help decision-makers develop a solid waste management strategy in response to COVID-19.

Bibliography
Battisti, Corrado. “Not only jackals in the cities and dolphins in the harbours: less optimism and more systems thinking is needed to understand the long-term effects of the COVID-19 lockdown.” Biodiversity (2021): 1-5.

de Sousa, Fabiula Danielli Bastos. “Management of plastic waste: A bibliometric mapping and analysis.” Waste Management & Research 39, no. 5 (2021): 664-678.

Garnett, Emma, Angeliki Balayannis, Steve Hinchliffe, Thom Davies, Toni Gladding, and Phillip Nicholson. “The work of waste during COVID-19: logics of public, environmental, and occupational health.” Critical Public Health (2022): 1-11.

Ghernaout, Djamel, and Noureddine Elboughdiri. “Plastic waste pollution worsen by the COVID-19 pandemic: Substitutional technologies transforming plastic waste to value added products.” Open Access Library Journal 8, no. 7 (2021): 1-12.

Herfien, Herfien, Akbarulla Akbarulla, Andri Wijaya, Benni Mapanta, Farid Martadinata, Gunansyah Gunansyah, Kurniati Kurniati et al. “Medical Waste In The COVID-19 Pandemic Era: Management Solutions.” Science and Environmental Journal for Postgraduate 4, no. 1 (2021): 36-53.

Karamanian, Susan L. “The precautionary principle.” In Elgar Encyclopedia of Environmental Law. London: Edward Elgar Publishing Limited (2021): 90-94.

Kumar, Amit, Vartika Jain, Ankit Deovanshi, Ayush Lepcha, Chandan Das, Kuldeep Bauddh, and Sudhakar Srivastava. “Environmental impact of COVID-19 pandemic: more negatives than positives.” Environmental Sustainability 4, no. 3 (2021): 447-454.

Lazarus, Jeffrey V., Salim S. Abdool Karim, Lena van Selm, Jason Doran, Carolina Batista, Yanis Ben Amor, Margaret Hellard, Booyuel Kim, Christopher J. Kopka, and Prashant Yadav. “COVID-19 vaccine wastage in the midst of vaccine inequity: causes, types and practical steps.” BMJ Global Health 7, no. 4 (2022): 1-5.

Liang, Yangyang, Qingbin Song, Naiqi Wu, Jinhui Li, Yuan Zhong, and Wenlei Zeng. “Repercussions of COVID-19 pandemic on solid waste generation and management strategies.” Frontiers of Environmental Science & Engineering 15, no. 6 (2021): 1-18.

Meng, Jian, Qun Zhang, Yifan Zheng, Guimei He, and Huahong Shi. “Plastic waste as the potential carriers of pathogens.” Current Opinion in Food Science 41 (2021): 224-230.

Roy, Poritosh, Amar K. Mohanty, Alexis Wagner, Shayan Sharif, Hamdy Khalil, and Manjusri Misra. “Impacts of COVID-19 outbreak on the municipal solid waste management: Now and beyond the pandemic.” ACS Environmental Au 1, no. 1 (2021): 32-45.

Rubab, Saddaf, Malik M. Khan, Fahim Uddin, Yawar Abbas Bangash, and Syed Ali Ammar Taqvi. “A Study on AI‐based Waste Management Strategies for the COVID‐19 Pandemic.” ChemBioEng Reviews 9, no. 2 (2022): 212-226.

Schaefer, G. Owen, R. J. Leland, and Ezekiel J. Emanuel. “Making vaccines available to other countries before offering domestic booster vaccinations.” JAMA 326, no. 10 (2021): 903-904.

Can Environmental Responsibility be Taught?

Academic Discipline: Education/Sociology
Course Name: Education/Sociology
Assignment Subject: Can environmental responsibility be taught?
Academic Level: Undergraduate-3rd Year
Referencing Style: APA
Word Count: 1,864

Teachers in the natural sciences, politics, history, and social studies, as well as the arts and literature, are already debating the degree to which education curricula should be oriented toward social transformation (Lehtonen, Salonen, & Cantell, 2019), and to what extent it is their responsibility to impart not only knowledge and facts but also values (Dewi & Primayana, 2019). The currents of research associated with environmental education, in particular the possibility of integrating curricula with the aim of teaching environmental responsibility (Aarnio-Linnanvuori, 2019), include environmental ethics, citizenship education, and education for sustainable development. In this paper, it will be argued that, while these currents may appear to have different objectives and values, they have to be combined in order to contribute to the construction of an eco-citizenship identity in a way that develops the emancipation, responsibility, and commitment (Levy, Oliveira, & Harris, 2021) of the future generation of eco-citizens.

Definition and scope
Teaching environmental responsibility refers to advocating specific initiatives such as reducing direct and indirect consumption, reducing and recycling waste, minimizing the frequency and quantity of polluting modes of transport, and promoting solidarity and environmental justice (Alexander et al., 2021), as well as understanding the political, economic, and cultural links to the environment (Lehtonen, Salonen, & Cantell, 2019). Teaching environmental responsibility as such requires implementing specific teaching methods (Aarnio-Linnanvuori, 2019) which engage the subjects, in particular by inviting them to challenge social, economic, and political systems and to question their values (Olsen et al., 2020). On the one hand, it is suggested that this should happen independently of school disciplines (for example, through parents and the community). On the other, environmental responsibility should already be addressed in schools by teaching the roots of participatory and “deliberative democracy” (Devaney et al., 2020). This means favouring action over knowledge (Alexander et al., 2021) in a way that explicitly addresses personal and societal principles and morals, but also interdependence and interdisciplinarity (Olsen et al., 2020), in the face of a rapidly changing world.

In all discourse, particularly in the field of the environment and sustainable development, citizen participation serves a deliberative aim, whose objective is the formation of an enlightened public opinion and the preparation of future generations to participate in consultative processes (Parra et al., 2020), and an emancipatory aim, which seeks to build collective awareness in order to transform the socio-environmental realities (Cachelin & Nicolosi, 2022) that are taking society down the path to environmental crisis (Levy, Oliveira, & Harris, 2021). This is not a question of transmitting knowledge or facts (Devaney et al., 2020) so much as of teaching a new generation how to apply that knowledge, so that they can appropriate the different principles and values (Aarnio-Linnanvuori, 2019) and understand how they can contribute to tackling emerging issues.

The difference between teaching environmental science and teaching environmental responsibility
The notion of responsibility is often attached to a context or an area (Cachelin & Nicolosi, 2022), namely moral and social responsibility (Alexander et al., 2021) with respect to our surroundings and the resources we need for our survival. In the beginning, parents, the extended family, and the community are largely responsible for setting children on the right path; these children, upon being formally educated, would then become responsible adults (Levy, Oliveira, & Harris, 2021) who understand how their contribution (or lack thereof) affects society at large. At the same time, parents teaching environmental responsibility early on is no guarantee (Zhang & Gibson, 2021) that children will become socially and environmentally responsible adults. Along the same lines, a lack of awareness of environmental responsibility in childhood does not mean that, through the course of their lives, these persons will not find their own path to environmental awareness and responsibility.

Teaching environmental responsibility, as opposed to teaching environmental science, requires taking ethics into consideration (Aarnio-Linnanvuori, 2019), meaning responsibility becomes a societal requirement (Parra et al., 2020), widely accepted as a norm. A good example of this is the recent shift in environmental consciousness (Olsen et al., 2020) and the subsequent actions which are no longer seen as radical but mainstream. As such, teaching ethics in schools might refer to constructing an ethical frame of mind (Aarnio-Linnanvuori, 2019), so that students gradually understand how their everyday habits impact the environment. For example, this means teaching them to be ethical consumers: teaching them about the carbon footprint, how to calculate it, and how to reduce their emissions (Franch, 2020) by making alternative dietary, travel, and consumption choices. It can also refer to teaching them about their career prospects and how to align their interests and ambitions (Lehtonen, Salonen, & Cantell, 2019) with environmental responsibility and sustainability. Teaching environmental responsibility as such should combine ethics with the educational process through self-realization (Zhang & Gibson, 2021) and a critical view of societal norms (Dewi & Primayana, 2019).

Another aspect concerns teaching the psychosocial attitudes (Cachelin & Nicolosi, 2022) that favour responsibility. This refers to teaching how to forge strong connections with both the human and the non-human environment (Lehtonen, Salonen, & Cantell, 2019), especially the importance of a connection to the rest of the living world as a means of safeguarding and preserving it. For example, teaching students to be empathetic and to relate (Aarnio-Linnanvuori, 2019) to the living world would result in mindful and considerate (Olsen et al., 2020) future citizens. This would help them build a favourable perception of the environment (Zhang & Gibson, 2021) and, ideally, internalize the imparted values and principles.

Curriculum
Teaching environmental responsibility means focusing on making students understand the management of limited collective natural resources (Parra et al., 2020) with an awareness that goes far beyond disciplinary teaching. In other words, it is not a matter of merely relaying scientific facts (which courses in the natural sciences already do), but of combining action and critical reflection (Olsen et al., 2020). Current educational actions are often based on responsibility countered by guilt (Aarnio-Linnanvuori, 2019), meaning the transfer of responsibilities to subsequent generations (Lehtonen, Salonen, & Cantell, 2019). However, Franch (2020) and Aarnio-Linnanvuori (2019), among others, distinguish between education “about” the environment and education “for” the environment. In this context, teaching responsibility means taking an interdisciplinary approach (Olsen et al., 2020).

In addition to curricula in science and technology, teaching environmental responsibility means making students understand the production of energy (Parra et al., 2020): its sourcing, access, and conservation, and especially renewable energy sources. It also entails understanding natural resource cycles, responsible waste and wastewater treatment (Parra et al., 2020), long-term resource management (Lehtonen, Salonen, & Cantell, 2019), and the impact of environmental pollution and degradation on the quality of water, soil, and air (Dewi & Primayana, 2019). While most consider these topics to already fall within the domain of science education, teaching responsibility specifically means referencing social values and applicability to human needs (Cachelin & Nicolosi, 2022). More precisely, it involves a critical analysis of the vision of a sustainable future and of the significance of the practices that both current and future generations have to exercise to achieve it (Alexander et al., 2021). Such a critical approach has not yet sufficiently penetrated educational practice (Devaney et al., 2020). Teaching environmental responsibility means teaching students to analyze those values (Zhang & Gibson, 2021) and the practices that follow from them.

Teaching eco-citizenship to new generations
With regard to environmental education and education for sustainable development, teaching environmental responsibility sits at the crossroads of the two, giving rise to the proposal of eco-citizenship (Levy, Oliveira, & Harris, 2021). Namely, teaching environmental responsibility must contribute to the emergence of citizen awareness and acknowledgment (Devaney et al., 2020), ground students in a body of knowledge (Franch, 2020), and enable them to persuade others, independently make informed and responsible decisions, and act by contributing to their local and global communities in a meaningful way. By stimulating critical thinking, teaching environmental responsibility can be an exercise in enlightened democracy (Parra et al., 2020) for future citizens.

With a view to enriching environmental science education with socially responsible citizenship (Devaney et al., 2020), and more specifically to forming the future eco-citizen (Franch, 2020), taking socio-ecological issues into account requires integrating objectives (Cachelin & Nicolosi, 2022) into science curricula, such as education about pressing environmental issues and the need to act (Zhang & Gibson, 2021) in order to avert the climate crisis. Understanding the roots, causes, and consequences of environmental issues (Alexander et al., 2021) requires reframing not only science education but education in general. New proposals direct educational effort not only at the relationship to the environment, but also at strengthening the network of relationships in the living environment (Zhang & Gibson, 2021), aimed toward the goal of sustainability.
Educating students to become environmentally responsible adults requires imparting principles and values (Parra et al., 2020) so that they understand their role in society (Devaney et al., 2020), not only in the future as adults, but right now, as students and as young citizens. This has to take on a particular dimension because, to be clear, education aimed at creating environmentally responsible citizens of the future is different from education for sustainable development. Treating education as a fundamental instrument for the protection of the environment and for sustainable development would ideally incorporate the principles of environmental responsibility into education for sustainable development in formal education systems (Aarnio-Linnanvuori, 2019).

Conclusion
When it comes to teaching environmental responsibility, the main challenges lie in the tensions at work between research aimed at imparting knowledge (Zhang & Gibson, 2021), understanding and action expressed through the curriculum (Dewi & Primayana, 2019), and teaching commitment and environmental consciousness (Aarnio-Linnanvuori, 2019). Teaching this interdisciplinary subject matter (Olsen et al., 2020) falls within the scope of socially active issues, which are also an educational issue for the citizens of the future (Zhang & Gibson, 2021) who will, with the right frame of mind and the right knowledge, contribute to building a truly sustainable world (Parra et al., 2020). When students are taught to be environmentally responsible, this concerns all aspects of their lives, including their future choice of career, their personal and public life, and their individual and collective consciousness (Franch, 2020). The choice of approach therefore carries strong consequences if left unaddressed or inadequately addressed.
In the process of teaching, teachers should encourage students to develop a critical perspective on society and on established modern societal norms (Cachelin & Nicolosi, 2022), namely harmful practices on a mass scale, and to be able to challenge the dominant discourses and practices in an informed and knowledgeable way (Alexander et al., 2021). In conclusion, although environmental responsibility is a complex subject to tackle, and especially to implement, it not only can be taught but should be taught as part of the curriculum at all levels of education, building upon previously acquired knowledge.

References
Alexander, W. L., Wells, E. C., Lincoln, M., Davis, B. Y., & Little, P. C. (2021). Environmental justice ethnography in the classroom: teaching activism, inspiring involvement. Human Organization, 80(1), 37-48.

Aarnio-Linnanvuori, E. (2019). How do teachers perceive environmental responsibility? Environmental Education Research, 25(1), 46-61.

Cachelin, A., & Nicolosi, E. (2022). Investigating critical community engaged pedagogies for transformative environmental justice education. Environmental Education Research, 28(4), 491-507.

Devaney, L., Brereton, P., Torney, D., Coleman, M., Boussalis, C., & Coan, T. G. (2020). Environmental literacy and deliberative democracy. Climatic Change, 162(4), 1965-1984.

Dewi, P. Y. A., & Primayana, K. H. (2019). Effect of learning module with setting contextual teaching and learning to increase the understanding of concepts. International Journal of Education and Learning, 1(1), 19-26.

Franch, S. (2020). Global citizenship education: A new ‘moral pedagogy’ for the 21st century? European Educational Research Journal, 19(6), 506-524.

Lehtonen, A., Salonen, A. O., & Cantell, H. (2019). Chapter 11: Climate change education: A new approach for a world of wicked problems. In Cook, J.H. (Ed.) Sustainability, Human Well-being, and the Future of Education (pp. 339-374). Cham: Palgrave Macmillan.

Levy, B. L., Oliveira, A. W., & Harris, C. B. (2021). The potential of “civic science education”: Theory, research, practice, and uncertainties. Science Education, 105(6), 1053-1075.

Olsen, S. K., Miller, B. G., Eitel, K. B., & Cohn, T. C. (2020). Assessing teachers’ environmental citizenship based on an adventure learning workshop: A case study from a social-ecological systems perspective. Journal of Science Teacher Education, 31(8), 869-893.

Parra, G., Hansmann, R., Hadjichambis, A. C., Goldman, D., Paraskeva-Hadjichambi, D., Sund, P., … & Conti, D. (2020). Education for environmental citizenship and education for sustainability. In Conceptualizing Environmental Citizenship for 21st Century Education (pp. 149-160). Cham: Springer.

Zhang, H., & Gibson, H. J. (2021). Long-term impact of study abroad on sustainability-related attitudes and behaviors. Sustainability, 13(4), 1953-1971.