Plagiarize this!

Plagiarism is bad.

Do you know how many people I just plagiarized by writing “Plagiarism is bad”?

I understand the idea behind plagiarism, and I agree that the intentional misrepresentation of the authorship of work is disreputable; however, I have to ask, in a day and age of digital communications, when have my exact thoughts not been thought before? When have my exact words not been used?

As a poet, I think it may be easier (perhaps?) to pen a unique vision of the sublime for any particular subject, given the flourishing capacity of language to be used in new and unique ways. However, when discussing a common subject in academia from a pragmatic perspective, such as “breathing”, or “sailing”, or “bicycles”, it is hard to envision that after almost 6,000 years of spoken and written language, someone, somewhere hasn’t thought the same thing in the same way. So where do we stop?

In saying this, I can’t help but think that the increased use of tools such as online plagiarism scanners will cow creativity, creating individuals scared to write a single line of prose without cross-referencing, annotating, and indexing every single word.

Can you imagine the illegibility and annoyance of reading a paragraph such as this:

Stones are round (author, 1756). Stones can be used for shaping tools (another author, 1845). Stones can come in many different colors (third_author, 1935). Purple stones are often a type of stone called amethyst (scientist, 1878). Some stones are made up of compressed earth (again_another_author, 1967).

And yet, universities and academia have spread so much fear of reprisal around charges of plagiarism that we, as students, are almost afraid to open our mouths and speak, knowing that the very breath we expel will exude, even unwittingly, plagiaristic tendencies.

Apart from all these concerns around the stifling of creativity and the fear of unknown and unexpected plagiarism accusations when the entirety of the World Wide Web becomes the standard against which plagiarism is judged, there is also the concern of privacy. That is: just what are these sites doing with the papers I’m submitting? Are they keeping them? Are they indexing them? Are they annotating them, giving me credit for my combination of sentences that has never yet been seen? And if not, who is protecting my original cerebral outpouring from being plagiarized?

It was once said that “great minds assimilate” (and no, I couldn’t find any Google references to someone actually saying this, so please note that this idea is mine now; make sure you reference me when using it in the future!). So in this future of watchers, great minds: I urge you to keep a notepad and pen handy, and anytime, anywhere you hear an idea that sounds striking to you, do not hastily absorb it into your own thinking. For if, at some point in the future, you find that you cannot remember where this novel idea came from, to present it to anyone as an idea of your own will damn you to the hells of literary purgatory.

As for the idea of bettering the approach of checking all written documents against all worldwide digital media for any iota of a hint of possible plagiarism, my response is: “All hope abandon, ye who enter here” (Alighieri, 1892).

References

Alighieri, D. (1892). The Vision of Hell. London: Cassell & Company.

No standard for morals?

Should you argue about morality with someone who doesn’t believe in morality?

If you ever meet someone who holds to the philosophical idea that we don’t really exist except in our minds (like the ideas of George Berkeley), what point is there in even arguing (except for sophistry, which is indeed fun)?

In the same vein, if you find someone who doesn’t believe in a standard of morality, they have nothing to offer in any discussion on morality, for all they can offer is the idea that there is no such thing as a standard for morality; which is itself a statement of a standard for morality, and therefore self-defeating.

It’s like saying out loud "There is no such thing as sound"…

Can there be a universal morality?

In my Ethics and Technology course, and in my previous blog post, I noted that many ethical frameworks are coming to the conclusion that ethics in information technology are based on individualistic morals (due to the nature of technology), and that in order to have a governing ethical framework in technology, there must be universals (not unlike in real life).

The question was asked, “How does one find the standards for the universal?” I can’t answer that question in a 250-500 word essay; therefore, I provided only hints toward my answer, without giving my answer.

C.S. Lewis writes in book one of Mere Christianity:

“Everyone has heard people quarrelling. Sometimes it sounds funny and sometimes it sounds merely unpleasant… They say things like this: ‘How’d you like it if anyone did the same to you?’ – ‘Come on you promised.’ People say things like that every day, educated … as well as uneducated… children… [and] grown-ups.”

“Now what interests me… is that the man who makes [these statements] is not merely saying that the other man’s behavior does not … please him. He is appealing to some kind of standard… and the other man seldom replies: ‘To hell with your standard.’” (Lewis, 2009, p. 257)

Lewis later states:

“Now this Law or Rule about Right and Wrong used to be called the Law of Nature… because people thought that everyone knew it by nature and did not need to be taught it… I know that some people say the idea of a Law of Nature … is unsound… but the most remarkable thing is this. Whenever you find a man who says he does not believe in a real Right and Wrong, you will find the same man going back on this a moment later… if you try breaking [a promise] to him, he will be complaining ‘It’s not fair’” (Lewis, 2009, p. 285).

On the idea of whether or not there are universal laws, Lewis concludes:

“It seems, then, we are forced to believe in a real Right and Wrong” (Lewis, 2009, p. 300).

I agree with Lewis, who continues further on in book one: in order to come to a conclusion about a universal set of right and wrong, one must find a standard to measure against. This standard must also exist necessarily outside of oneself in order to be appealed to universally. Therefore, I believe the most important first step towards a global view of Right and Wrong is agreement on a common standard.

This, of course, is where the difficulty begins, as all discussions of morality begin in trying to lay the foundation of a moral framework (i.e., what is the standard against which right and wrong are judged). Some of the more common frameworks are deontological, utilitarian, and existentialist, all of which have their supporters and their opponents.

As this topic is very complex and cannot be addressed by anything short of a doctoral thesis, I will briefly say that a good starting point is the “Golden Rule”: treat others as you want to be treated. This, I believe, at least points us in the right direction.

References

Lewis, C. S. (2009). Mere Christianity. HarperCollins e-books (Kindle Edition).

How to give children a moral compass in Cyberspace

Within cyberspace, where people roam with little to no immediate governing restriction, how does one impress upon the youth and young adults still developing their moral compass what is acceptable from a moral and ethical perspective?

Nancy Willard points out in her article Moral Development in the Information Age that the framework of the Internet was designed to be disconnected and decentralized. As a result, no one agency can effectively police and dictate morality and ethical responsibility; decisions are therefore widely left up to individuals (Willard, 1997).

Because of four key factors that Willard points out in her article, namely: a) lack of affective feedback; b) reduced fear of detection; c) a new environment that requires new rules; and d) perceptions of social injustice (Willard, 1997), it seems difficult for individuals to make the transition between the “real world” and the “digital world”. And because morality and ethics in cyberspace are driven mainly by individual decisions, it becomes even more paramount that these issues be addressed during the growth and development of today’s youth (Willard, 1997).

As a result, I believe that first and foremost, in order to extend morality and ethics into the Information Age, there must be agreement on the ideals of universal propositions, like those defined by Turiel: concepts of justice, rights, and welfare (Willard, 1997).

Beyond this central foundation, according to further studies by Hoffman and Baumrind, in order to teach internalized moral responsibility it is imperative that parents, teachers, and other influential men and women help children and young adults focus on the consequences of their actions measured against these universals, rather than on the responsibility to follow a set of rules (Willard, 1997).

Through this approach, we prepare the future generation to mature in their own understanding of what is morally acceptable and unacceptable, even in a world where boundaries are largely determined by individualistic principles (Willard, 1997), and where unexplored moral challenges present themselves frequently.

References

Willard, N. (1997). Moral Development in the Information Age. Retrieved July 30, 2009, from http://tigger.uic.edu/~lnucci/MoralEd/articles/willard.html

Information Ethics… an interesting discussion

 

Where does one actually draw the line between right and wrong in technology ethics, and how does one make those decisions? Are things really black and white?

What if, let’s say, you were asked by your employer to steal data from another organization to give your employer a competitive advantage? I think most of us would say that it is unethical. Now, what if your employer is the NSA or the CIA, you’re a covert operative, and you are being asked to steal information from an enemy, information that could give your country a competitive edge, or protect its safety and welfare?

Now, in this case, and in many respects, we’re starting to get into territory that isn’t as black and white; I think many more people would be divided over this question than the original one. But what is it about the two scenarios that makes one so different from the other?

For example, in my job I am sometimes asked by an organization to execute penetration tests against its own organizational body. So when executing a risk assessment through penetration testing, I call up the company, get a sweet gal on the other end of the phone line, make up a fictitious name and a fictitious problem, and basically lie to her to deceive her into giving me secret and protected information.

In so doing, I then build a report that outlines to the members of the organization where their weaknesses are, so that they can protect their systems against real hackers who would be out to deceive and retrieve real data for real harm. But in this case, was it OK that I was lying and deceiving and breaking laws to prevent other bad people from lying and deceiving and breaking laws?

These questions of ethics aren’t necessarily tied to information technology either; what about police officers who speed down the road so that they can get to the speed trap and catch speeders who are speeding down the road?

The intrigue of all these types of discussions is what so tightly draws me to questions of Information Ethics, and ethics as a whole.

What I’ve been working on the last ten weeks

Ok, so this is going to be really boring for almost everyone… but I am adding it just as a bit of a diary for myself…

 

Tiny bits of information, 0s and 1s, coursing through the veins of a mass amalgamation of wires and routers and computers and pupils and into brains. The world of information technology is indeed a marvelous place in which to become lost and wander. And yet, what lurks behind the monitors and CPU chassis, beyond the insulated blue covering of the Cat 5e, is the world of mathematics.

In the first week of our discrete math course this spring quarter, we began discussing algorithmic efficiency. The goal was to answer the question of what makes one algorithm more efficient than another. To answer this question we studied various ways to compare the complexity and number of steps necessary to complete a computation.

We found that even in today’s world, where memory is measured in gigabytes and tiny processors operate at millions of instructions per second, these operations still take time and money; and while computers are growing faster, smaller, and more powerful, the things we are trying to do with them become more complex and intriguing, requiring even today’s chip and software designers to be cognizant of operational efficiency.
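To make that concrete, here is a tiny C# sketch (my own illustration, not one of the course assignments) that counts the steps a linear search and a binary search take to find the last element in the same sorted array of one million values:

```csharp
// Counting comparisons: O(n) linear search vs. O(log n) binary search.
using System;

class SearchSteps
{
    static int LinearSearch(int[] a, int target, out int steps)
    {
        steps = 0;
        for (int i = 0; i < a.Length; i++)
        {
            steps++;                       // one comparison per element visited
            if (a[i] == target) return i;
        }
        return -1;
    }

    static int BinarySearch(int[] a, int target, out int steps)
    {
        steps = 0;
        int lo = 0, hi = a.Length - 1;
        while (lo <= hi)
        {
            steps++;                       // one comparison per halving of the range
            int mid = (lo + hi) / 2;
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1; else hi = mid - 1;
        }
        return -1;
    }

    static void Main()
    {
        int[] sorted = new int[1000000];
        for (int i = 0; i < sorted.Length; i++) sorted[i] = i;

        LinearSearch(sorted, 999999, out int linearSteps);
        BinarySearch(sorted, 999999, out int binarySteps);
        Console.WriteLine($"linear: {linearSteps} steps, binary: {binarySteps} steps");
        // prints: linear: 1000000 steps, binary: 20 steps
    }
}
```

Same array, same target, five orders of magnitude fewer steps; that gap is exactly what the complexity comparisons in the course were measuring.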

In chapter two we discussed different types of relations and functions and inductive proofs, laying the foundation for future topics in set theory and for proving mathematical statements even when dealing with possible infinities. For example, how do we know that n² is always less than 2ⁿ for every integer n ≥ 5, even if we don’t have the computational power and lifespan to check every possible n? Again, knowing that computer processing is still limited to finite computations, this concept of dealing with sets in a finite manner, even when looking to solve problems that verge on the infinite, becomes very important.
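For the curious, here is the standard induction argument, sketched in brief (my addition, not the textbook’s wording), showing how a finite argument covers infinitely many cases:

```latex
% A standard induction sketch (my addition, not from the course text).
\textbf{Claim.} $n^2 < 2^n$ for every integer $n \ge 5$.

\textbf{Base case.} For $n = 5$: $5^2 = 25 < 32 = 2^5$.

\textbf{Inductive step.} Assume $k^2 < 2^k$ for some $k \ge 5$.
Since $2k + 1 < k^2$ whenever $k \ge 5$,
\[
(k+1)^2 = k^2 + 2k + 1 < k^2 + k^2 \le 2^k + 2^k = 2^{k+1}.
\]
By induction, the claim holds for all $n \ge 5$.
```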

Moving on to chapters four, five, six, and seven (yes, for some reason we skipped chapter three on cryptography, which would have been very exciting!), we began to discuss collections of connected vertices called graphs and networks. Within these chapters we analyzed how to build graphs out of connected vertices, how to analyze graphs for circuits and paths, and how to determine the shortest paths from any given point on a connected graph. We discussed special graphs called trees, and examined different types of trees, like rooted and binary trees. Once we had analyzed various types of connected graphs and trees, we discussed algorithmic ways to analyze the connectedness of these graphs, and learned ways to match up different connected points on a graph (or tree) optimally.
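As an illustration of the shortest-path material, here is a compact C# sketch of Dijkstra’s algorithm, one standard way to compute those shortest paths (my own toy example; the graph and its weights are made up):

```csharp
// Dijkstra's shortest-path algorithm over an adjacency matrix.
// double.PositiveInfinity marks "no edge"; the method returns the shortest
// distance from `source` to every other vertex.
using System;

class ShortestPaths
{
    static double[] Dijkstra(double[,] w, int source)
    {
        int n = w.GetLength(0);
        var dist = new double[n];
        var done = new bool[n];
        for (int i = 0; i < n; i++) dist[i] = double.PositiveInfinity;
        dist[source] = 0;

        for (int iter = 0; iter < n; iter++)
        {
            // pick the unvisited vertex with the smallest tentative distance
            int u = -1;
            for (int v = 0; v < n; v++)
                if (!done[v] && (u == -1 || dist[v] < dist[u])) u = v;
            if (dist[u] == double.PositiveInfinity) break; // the rest are unreachable
            done[u] = true;

            // relax every edge leaving u
            for (int v = 0; v < n; v++)
                if (w[u, v] < double.PositiveInfinity && dist[u] + w[u, v] < dist[v])
                    dist[v] = dist[u] + w[u, v];
        }
        return dist;
    }

    static void Main()
    {
        double inf = double.PositiveInfinity;
        // a small weighted graph on 4 vertices
        double[,] w =
        {
            { 0,   1,   4,   inf },
            { 1,   0,   2,   6   },
            { 4,   2,   0,   3   },
            { inf, 6,   3,   0   },
        };
        Console.WriteLine(string.Join(", ", Dijkstra(w, 0))); // 0, 1, 3, 6
    }
}
```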

Again, given the need to remain efficient and small, all of these concepts surrounding graphs, matchings, and efficient paths between connected points are very important within the field of information technology, and in the world itself. These techniques can be used for such varied tasks as finding the most efficient way to get water to masses of people, finding the quickest and cheapest route from point A to point B, prioritizing the delivery and speed of data packets across a communication network, and so on.

Chapter eight continued the thread of set theory and matching. It expanded on the fundamentals of combinatorics and permutations, providing an understanding of how one can use mathematical formulas to determine the number of ways values within a set can be matched and ordered.
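A minimal C# sketch of those two counting formulas (again my own illustration, not an assignment from the course):

```csharp
// P(n, k) = n!/(n-k)! counts ordered arrangements;
// C(n, k) = n!/(k!(n-k)!) counts unordered subsets.
using System;

class Counting
{
    static long Permutations(int n, int k)
    {
        long result = 1;
        for (int i = 0; i < k; i++) result *= n - i; // n * (n-1) * ... * (n-k+1)
        return result;
    }

    static long Combinations(int n, int k)
    {
        long result = 1;
        for (int i = 1; i <= k; i++)
            result = result * (n - k + i) / i;       // stays integral at every step
        return result;
    }

    static void Main()
    {
        Console.WriteLine(Permutations(5, 3)); // 60 ordered triples from 5 items
        Console.WriteLine(Combinations(5, 3)); // 10 unordered triples
    }
}
```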

Chapter eight then led into a discussion of iteration in chapter nine, covering functions called recursively to produce cumulative values such as compound interest. Iterations such as the Fibonacci recurrence were discussed, and we examined first-order linear difference equations and second-order homogeneous linear difference equations with constant coefficients. The purpose of this discussion was, once again, to understand how algorithms with very large values or potentially infinite input and output can be executed within a finite state with the fewest functional operations.
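Here is a short C# sketch (my own, not from the text) of both recurrences iterated in a simple loop, which is exactly the finite, fixed-step execution the chapter was driving at:

```csharp
// Compound interest is a first-order linear recurrence a(n) = (1 + r) a(n-1);
// Fibonacci is a second-order recurrence f(n) = f(n-1) + f(n-2).
using System;

class Recurrences
{
    static double CompoundInterest(double principal, double rate, int years)
    {
        double balance = principal;
        for (int n = 0; n < years; n++)
            balance *= 1 + rate;          // a(n) = (1 + r) a(n-1)
        return balance;
    }

    static long Fibonacci(int n)
    {
        long prev = 0, curr = 1;          // f(0) = 0, f(1) = 1
        for (int i = 0; i < n; i++)
            (prev, curr) = (curr, prev + curr);
        return prev;
    }

    static void Main()
    {
        Console.WriteLine(CompoundInterest(1000, 0.05, 10)); // ~1628.89
        Console.WriteLine(Fibonacci(10));                    // 55
    }
}
```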

And then we came to the final chapter: chapter ten. In chapter ten we began to discuss what interests me the most in the whole conversation of discrete mathematics: finite state machines. We examined logic gates and integrated circuitry, bringing the discussion of algorithmic efficiency from the ethereal world of intangible algorithms down to building real-world circuits at the hardware level.
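As a small taste of the idea, here is a two-state machine in C# (my illustration) that tracks whether it has read an even or odd number of 1-bits, the software cousin of the parity circuits built from XOR gates:

```csharp
// A finite state machine with two states: Even and Odd parity of 1-bits seen.
using System;

class ParityMachine
{
    enum State { Even, Odd }

    static bool HasEvenParity(string bits)
    {
        var state = State.Even;                   // start state: zero 1s seen so far
        foreach (char bit in bits)
        {
            if (bit == '1')                       // a 1 toggles the state
                state = state == State.Even ? State.Odd : State.Even;
            // a 0 leaves the state unchanged
        }
        return state == State.Even;               // accept iff we end in Even
    }

    static void Main()
    {
        Console.WriteLine(HasEvenParity("1001")); // True  (two 1s)
        Console.WriteLine(HasEvenParity("1101")); // False (three 1s)
    }
}
```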

During this ten-week course, I chose to produce all of my weekly assignments in bits and bytes, using Microsoft C# (a high-level language that compiles to intermediate code for the .NET runtime) to produce input/output sequences understandable and interpretable by human eyes. While some weeks were more challenging than others, each week presented some new twist in trying to understand how to represent a human-defined problem in a way that circuits and numbers could operate on while still producing meaningful output.

While none of my assignments required writing code efficient and stable enough to sustain life (like a ventilator or respirator apparatus), it was still often challenging to produce the optimal output in the minimal number of steps, especially when required to present nonlinear structures (like graphs and trees) in linear technologies like bit streams and bytes. Additionally, there were some challenges to overcome when faced with the limits of numerical representation on a 64-bit operating platform.

In closing, I have compiled a final project that presents, in a single user interface, all of the functions and routines that I created throughout the ten-week course. This course has provided the benefit of continuing to broaden my understanding of the fundamental concepts behind computational theory and technological efficiency.

Project Files

The existential question of mathematics…

In our university Discrete Mathematics course, there was a discussion of the Knapsack problem (as it is called).

The problem goes like this:

A U.S. shuttle is to be sent to a space station in orbit around the earth, and 700 kilograms of its payload are allotted to experiments designed by scientists. Researchers from around the country apply for the inclusion of their experiments. They must specify the weight of the equipment they want taken into orbit. A panel of reviewers then decides which proposals are reasonable. These proposals are then rated from 1 (the lowest score) to 10 (the highest) on their potential importance to science… It is decided to choose experiments so that the total of all their ratings is as large as possible (Otto, Spence, Vanden Eynden, & Dossey, 2006).

After this outline, we’re asked to examine algorithmic variations that would allow us to select the most valuable set of experiments out of the 4096 possible combinations (2¹²) that arise from the 12 proposed experiments.
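A brute-force C# sketch of that search (my own, with hypothetical weights and ratings, since the textbook exercise supplies its own table):

```csharp
// Try all 2^12 = 4096 subsets of the 12 experiments and keep the best total
// rating that fits within the 700 kg payload capacity.
using System;

class Knapsack
{
    static void Main()
    {
        // hypothetical data for illustration only
        int[] weightKg = { 36, 264, 188, 203, 104, 7, 92, 65, 25, 170, 80, 302 };
        int[] rating   = {  5,   9,   6,   8,   4, 2,  7,  3,  6,   8,  5,  10 };
        const int capacity = 700;

        int bestRating = 0, bestMask = 0;
        for (int mask = 0; mask < (1 << 12); mask++)   // each bit selects one experiment
        {
            int w = 0, r = 0;
            for (int i = 0; i < 12; i++)
                if ((mask & (1 << i)) != 0) { w += weightKg[i]; r += rating[i]; }
            if (w <= capacity && r > bestRating) { bestRating = r; bestMask = mask; }
        }

        Console.WriteLine($"best total rating: {bestRating}");
        for (int i = 0; i < 12; i++)
            if ((bestMask & (1 << i)) != 0) Console.Write($"exp{i} ");
    }
}
```

At 4096 subsets this exhaustive search is instant, but the same approach with 50 experiments would mean 2⁵⁰ subsets, which is the whole point of studying cleverer knapsack algorithms.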

Isn’t it interesting how a mathematical question can become an existential question? While theoretically one could evaluate the knapsack equation from a logical perspective and get the ‘biggest bang for the buck’, one also has to wonder (if this were a real scenario) who assigned the rating values for these experiments, and what type of objective or subjective approach they took.

For example: what if we had two experiments, one that would give us more information about cancer and one that gave us better insights into obesity? Most people might be inclined to include the research on cancer, as the rate of death directly attributed to cancer in the world is typically thought to be much higher than that attributed to obesity. However, what if the probable outcome of the research on cancer might move us only a few years ahead in our research, while the research on obesity has a probable end goal of eliminating obesity within just a few years? And what about all of the secondary causes of death that are indirectly linked to obesity? How does one decide the rating mathematically?

This seems to show that even while our capability of solving complex algorithmic variations using state machines can increase the efficiency of mathematical computation, the answer to Alan Turing’s fundamental question of whether a computer can ever make ‘human’ decisions seems to lie outside the realm of algorithmic efficiency!

References

Otto, A. D., Spence, L. E., Vanden Eynden, C., & Dossey, J. A. (2006). Discrete Mathematics (5th ed.). Boston: Pearson Addison Wesley.

This is why I have such a hard time with Math….

Do you think it’s possible to be too logical for math? Follow the thread below and see my question and my professor’s responses… It legitimately looks to me like you can’t figure out the order of operations in a word problem unless you know what your answer is supposed to look like… Does that mean the rules of the order of operations don’t necessarily apply without some other external logical application?

 

I think this is why math frustrates me – I probably just overthink everything. =(

 

My Original Question:

Content Author: Jed Logiodice

When determining BAC (page 34), the following word problem is given:

BAC = number of oz X % alcohol X 0.075 / body weight in lb – hr of drinking X 0.015.

To simplify the question, let w = number of ounces, let x = % of alcohol, let y = body weight, and let z = hours of drinking.

When the book gives the BAC equation as:

w * x * .075 / y – z * .015

This can create an order of operations like this: (w * x * .075 / y) – (z * .015) [which results in the answer the book is looking for]; however, why could one not equally contrive the following equation out of the above word problem:

(w * x * .075)
___________
y – (z * .015)

The way the word problem is written, it appears equally valid to assume either order of operations; however, unless one assumes the first, the answer will not match what the book states it should be.

Is there some rule of order of operations that I’m missing for word problems that says “Never use fractional notation, unless the question is asking for a fraction”?

Thanks!

 

My Professor’s Response:

(w * x * .075 divided by y) – (z * .015)
Note: I have added parentheses to show that we do ALL multiplication and division from left to right before any addition or subtraction.

w = 4 * 12 = 48 oz

(w * x * .075 divided by y) – (z * .015)
(48 * 3.2 * .075 divided by 190) – (2 * .015)
= (153.6 * .075 divided by 190) – (2 * .015)
= (11.52 divided by 190) – (2 * .015)
= (.060631578) – (2 * .015)

Remember, we do ALL multiplication and division from left to right before any addition or subtraction so our next step is to multiply 2 * .015

= (.060631578) – (2 * .015)
= (.060631578) – (.03)

= .030631578

Rounded to the nearest thousandth (3 digits to the right of the decimal point), we have .031 as our answer

 

My Follow-up Question:

Author: Jed Logiodice

But when I read the statement I saw this:

(w * x * .075)
____________
y – (z * .015)

instead of this: (w * x * .075 / y) – (z * .015).

That is, how was one to know that it was intended to be a linear equation (where the rules of operations run across from left to right) instead of a fraction (evaluated above and below the division line separately)?

I really thought that (w * x * .075) was the dividend and (y – (z * .015)) was the divisor…

Does that make sense?

I know it might seem like a foolish question, but I literally spent 20 minutes doing that question over and over and over, never getting the right answer (but always getting the same answer), until I accidentally figured out that it was just a single linear equation. Then I started to ask myself, “How was I supposed to know that, other than just assuming? Was there some clue I missed?”

My single biggest problem with math is that I way overthink things!

 

My Professor’s Follow-up Response:

One should always assume that we should follow the order of operations unless brackets or parentheses or a fraction bar is in the formula. OK?

 

My Follow-up Request:

 

Even in word problems?

Take, for example, this problem: If you take 6 eggs and divide them among 2 women and 1 man, how many eggs does each person have?

If we always keep to the order of operations (without brackets in the sentence), then the answer is (6 / 2) + 1 = 4; 4 eggs apiece is obviously the wrong answer in this case – although it follows the order-of-operations rule we’re describing.

However, it would seem more logical (and in this case correct) to compute 6 / (2 + 1) = 2. This gives the right answer (which we can verify because we know what the value should be), but doesn’t follow our prescribed operational rule.

Taking this discussion back to the case of the BAC: the same logical argument could be applied to the word problem, causing one to interpret the problem as a fractional statement with a numerator and a denominator, rather than as a linear equation – but one wouldn’t necessarily know that this reading was wrong (and what order of operations was really intended) unless one knew what the answer was supposed to be…

So I’m still left wondering: how can we tell, in a word problem like the BAC, what the real order of operations is supposed to be, without knowing what the answer is supposed to be?

I apologize if this appears to be sophistry… I’m legitimately trying to figure out why I had the wrong answer, when from my viewpoint the way I executed the problem was just as valid as the way the book did it.

Perhaps I’m too logical for math? 🙁
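For what it’s worth, a tiny C# sketch (my own, using the numbers from the thread above) shows just how different the two readings are:

```csharp
// The two readings of the BAC word problem, with the thread's inputs:
// 48 oz, 3.2% alcohol, 190 lb, 2 hours of drinking.
using System;

class BacReadings
{
    static void Main()
    {
        double w = 48, x = 3.2, y = 190, z = 2;

        // reading 1: a single left-to-right expression (what the book intended)
        double linear = w * x * 0.075 / y - z * 0.015;

        // reading 2: everything after "divided by" taken as the denominator
        double fractional = (w * x * 0.075) / (y - z * 0.015);

        Console.WriteLine($"linear:     {linear:F3}");     // 0.031
        Console.WriteLine($"fractional: {fractional:F3}"); // 0.061
    }
}
```

Same inputs, two defensible parses, two very different BACs – which was exactly my frustration.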

So what came before that?

 

This thought came from a “What caused the Big Bang” type of discussion.

 

Something had to cause the Big Bang, unless the Big Bang always existed (which is not possible, as it would have remained a point of singularity forever unless acted upon by an outside force – so then the question would be where that force came from, and you would end up in an impossible series of circular questioning).

 

So, when discussing the Big Bang: something caused it. It is not possible to have something come from nothing (ex nihilo nihil fit – out of nothing, nothing comes).

 

In order for something to come from nothing, it would have to create itself. And something would have to predate itself before it could create itself. That is, it would have to exist before it existed. This is a logical impossibility.

 

Nothing has ever come from nothing. Philosophically and logically speaking, if there was ever a point in existence when nothing existed, then nothing would still exist; and because we do exist, we know there was never a time when nothing existed (Thomas Aquinas makes this argument in his Quinque Viae).

 

In fact, not even God could create himself; therefore God must have always existed (which is a central claim of Judeo-Christian doctrine).

 

Additionally, God would be changeless (i.e., the same yesterday, today, and tomorrow) – another foundational claim of central Judeo-Christian teaching – and God would need nothing; He would be complete and whole in His personage, able to exist eternally without input or output (another central claim of Judeo-Christian doctrine).

 

🙂

Stellar Lifecycles – A Final Paper

Stellar Lifecycles

PHY1000 SECTION 1

Monday, December 08, 2008

Jediah Logiodice

 

Contents

 

Introduction

Terra Mater – Surviving on Planet Earth

Stellar Properties

Stellar Life

Conclusion

References


Introduction

 

One commonly held view of the creation of the universe states that “In the beginning, God created the heavens and the earth” (Gen. 1:1, New International Version); another common view of creation, while not contradictory, is definitely less mystical, and goes a little something like this: “Bang!”

Fast forward some 14 billion years, and zoom in across billions of light-years to a spiral galaxy called the Milky Way, into a cluster of planets within a solar system surrounding a small yellow dwarf star, to a tiny little planet that at first seems quite insignificant. And yet, with careful study of the universe, it is found that creation has been tuned to bring about a species called humanity, apparently for the very purpose of allowing humans to ask the most basic of fundamental questions, like: “Where did we come from?”, “Why are we here?”, and “Where are we going?”

Terra Mater – Surviving on Planet Earth

 

To begin our journey, we find that this planet maintains a very delicate harmony of oxygen, nitrogen, and water cycles, which provide the basic substances for life to flourish. These components all maintain coherence within an atmosphere that not only provides a base for these complex cycles, but also traps heat, warming the surface, and filters out harmful radiation that would otherwise bombard the flora and fauna that have taken up residence.

On top of this atmospheric cocoon we find a magnetic shield, also providing protection from harmful forms of radiation. We find a moon in harmonious dance, driving the tides that pull the oceans to and fro, aerating them and providing for a flourishing of oceanic life. And still further out, we have this star called the sun, which provides heat and warmth and the breath of life through photosynthetic plant life. By whatever means you come to your final conclusion, it appears undeniable that the universe and everything within it is finely tuned to produce life. And thank goodness for that, for otherwise I would not be here writing this paper, and you, in turn, would not be reading it.

A further review of this tiny little planet shows that while most of these tiny little objects we call humans are busy scurrying around from day to day, sometimes unaware of how immaterial they really are, there are among them those who will pause, look up, and think about what is out there, somewhere beyond the troposphere, beyond the stratosphere, the thermosphere, and even beyond the exosphere; far out in the dark night sky.

The story of this astronomical undertaking begins with such an individual; his name was Isaac Newton.

While there were many important names attributed to discoveries and classifications in astronomy long before Newton, like Johannes Kepler, who provided fundamental concepts of planetary motion, it was Isaac Newton who formulated three universal laws that explained motion on a grand scale. Newton’s laws were so fundamental to the understanding of the universe that Newtonian physics dominated the world of physics for a few hundred years, until the introduction of quantum mechanics in the early 1900s.

 

Stellar Properties

 

While Newton’s version of Kepler’s third law of planetary motion was able to provide information about the masses of stars found in binary systems, Newton had even more to offer the world of astronomy than the laws of motion, for it was he who first provided insights into the nature of light (Bennett, Donahue, Schneider, & Voit, 2007, p. 148).

Through the advancements in the study of light (spectroscopy) that came later, scientists and astronomers found that through emission and absorption lines they could determine the chemical makeup of distant light-producing objects (Bennett, Donahue, Schneider, & Voit, 2007, p. 162).

Additionally, by examining the spectra of these objects in conjunction with laboratory studies of the spectral lines of known chemicals, scientists could also determine whether objects were moving towards our planet or away from it, and could even determine how fast these objects were rotating (Bennett, Donahue, Schneider, & Voit, 2007, p. 168). Another use for spectral lines was later found in categorizing the surface temperatures of stars (Bennett, Donahue, Schneider, & Voit, 2007, pp. 508-509).
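The measurement behind that determination is the Doppler shift; in its standard non-relativistic form (my addition, not a quotation from the textbook), the fractional shift of an observed spectral line away from its laboratory rest wavelength gives the radial velocity directly:

```latex
% Positive shift (redshift) means the object is receding;
% negative shift (blueshift) means it is approaching.
\frac{\lambda_{\text{observed}} - \lambda_{\text{rest}}}{\lambda_{\text{rest}}}
= \frac{v_{\text{radial}}}{c}
```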

Further investigation of stars provides detailed information about their luminosity and apparent brightness. By measuring a star’s visual brightness and its distance (e.g. through parallax), we can then determine how bright the star really is through the inverse square law.
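Stated concretely (this is the standard form of the law, not a quotation from the text):

```latex
% Apparent brightness b falls off with the square of the distance d,
% so measuring b and d recovers the true luminosity L.
b = \frac{L}{4\pi d^{2}}
\quad\Longrightarrow\quad
L = 4\pi d^{2}\, b
```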

And so, we find that Newton and his discoveries paved the way for understanding a star’s luminosity, temperature, density, and chemical composition!

 

Stellar Life

 

As we look out into the night sky, we can tell, sometimes even with the naked eye, that not all stars are created equal. Based on its surface temperature, a star may produce reddish light, white light, yellow light, or even blue light (Bennett, Donahue, Schneider, & Voit, 2007, p. 508). While some may be tempted to speculate that the yellower and whiter stars are happier than the redder (angry) and bluer (sad) ones, a star’s brightness depends not on its cheery disposition, but on its most fundamental property at birth: mass.

From birth to death, a star’s lifetime is strongly influenced by the mass it is first created with. The larger a star, the faster and hotter it burns, the heavier the elements (essential to life) it produces through nuclear fusion, and the more spectacular its final days of destruction will be.

While a massive star will end in a supernova that leaves behind a neutron star, smaller main sequence stars will most often outlast such stars by billions of years.

A main sequence star begins with the compression of hydrogen and helium until the force of gravity heats the core enough to initiate nuclear fusion. The star then continues in this state of gravitational equilibrium for millions or billions of years, depending on its mass; this is the state the sun is currently in.

Once the main sequence star has used up all of the hydrogen fuel in its core, there is no longer enough outward pressure to keep the star from collapsing under its great gravitational weight. As the star begins to collapse inward, the layers of hydrogen surrounding the collapsing core heat up until they reach the point of nuclear fusion.

This will cause the star to expand, becoming a red giant, which can at its peak be “100 times larger in radius, and more than 1,000 times brighter in luminosity [than the sun]” (Bennett, Donahue, Schneider, & Voit, 2007, p. 551).

As the layers of hydrogen burn, they deposit helium into the shrinking core, which continues to heat up. Once the helium core reaches 100 million Kelvin, nuclear fusion begins in the inner core as well.

Now that the star has both a nuclear-active helium core and nuclear-active hydrogen layers, it will eventually undergo a helium flash, expanding the hydrogen layers, which subsequently cool, causing the star to produce less visible light.

Once the star has completely converted hydrogen to helium, and helium to carbon, nuclear fusion will cease; the star will cast off its outer layers in a brilliant show of lights called a planetary nebula, and all that will remain is a white dwarf. This white dwarf will continue to produce light until it has cooled, in the distant future.

Both massive and not-so-massive stars have one thing in common: they create and recycle elements within the universe, providing the building blocks that feed into the creation and existence of life on Earth. They are a fundamental part of our circle of life.

 

Conclusion

 

In the end, we find that this massive, beautiful universe, as we are currently able to observe it, has played a significant role in the creation and maintenance of the very lives we have been given. This very existence allows us to study and observe the universe, and should leave us in the fullness of wonder and awe.

However, without the capability to see beyond the stars and the universe as it exists, the scientific pursuit of origins ends at the moment of creation and provides no further means to research these existential questions; and thus, within science alone, we are left as if waking from “a bad dream” (Jastrow, 1992, pp. 106-107).

To build upon Einstein’s thought when he said, “the most incomprehensible thing about the universe is that it is comprehensible” (BrainyQuote.com, 2008), I would leave you with the final question that remains unanswered and incomprehensible from a scientific perspective, and that question asks: “why?”


References

 

BrainyQuote.com. (2008). Retrieved December 08, 2008, from http://www.brainyquote.com/quotes/quotes/a/alberteins125369.html

Bennett, J., Donahue, M., Schneider, N., & Voit, M. (2007). The Cosmic Perspective (4th ed.). San Francisco: Pearson Education, Inc.

Jastrow, R. (1992). God and the Astronomers. United States: Readers Library, Inc.