"Mathematics / Statistics" Essays


Statistical Analysis of Restaurant Patrons Term Paper

Term Paper  |  4 pages (1,636 words)
Bibliography Sources: 1


Statistical Analysis of Restaurant Patrons

What type of research question (ie: descriptive, comparative, relationship) is being asked by the researchers?

The research question being asked by the researchers is whether the expression on a patron's face in a restaurant predicts the percentage of the tip given to his or her waiter or waitress. The researchers are using a… [read more]

History of Pi Term Paper

Term Paper  |  2 pages (749 words)
Style: MLA  |  Bibliography Sources: 3


Greek Letter Pi Equations and Notations

Some of the most complex ideas and concepts came from the earliest history of mankind. For example, the Greek letter pi, or the ratio between a circle's circumference and its diameter, stems back to early biblical times.

Algebra began its development in both Egypt and Babylonia around 1650 BC. However, historians remain uncertain as to whether or not new ideas traveled between these two countries. Written relics such as papyri and the Hammurabi clay tablets of this time indicate that algebra in Egypt was less sophisticated than that in Babylonia (Gullberg, 1997), in part because Egypt had a more primitive numeral system. It is also believed that Babylonian influences spread to Greece (500 BC to 300 BC), then to the Arabian Empire and India (700 AD), and finally to Europe (1100 AD) (Baumgart, 1969).

The equations and notations that are applied today were first used around 1700 BC and standardized by about 1700 AD, primarily because of the invention of the printing press in 1450 and the ability of scholars to easily travel from one location to another. This helped spread ideas across the continents. However, there has never been complete consistency of algebraic notations, and differences are still found in various areas of the world. For instance, many Americans use a period as the decimal separator while Europeans use a comma, writing the approximation of pi as 3.14 or 3,14 respectively (Baumgart, 1969).

The concept of pi was also found in the Bible's Old Testament. For example, 1 Kings 7:23 says: "Also he made a molten sea of ten cubits from brim to brim, round in compass, and five cubits the height thereof; and a line of thirty cubits did compass it round about" (Blatner, 13), meaning, perhaps, that pi = 3. Scholars have debated this verse for centuries, and they are not much closer to knowing the truth now. Some people believe it is just an approximation, and others argue that "... The diameter perhaps was measured from outside, while the circumference was measured from inside" (Tsaban, 76).

According to Tsaban (78), most of these scholars do not notice another use of pi that is more helpful: In Hebrew, each letter equals a certain number,…… [read more]

Statistical Language Term Paper

Term Paper  |  2 pages (644 words)
Style: APA  |  Bibliography Sources: 2


¶ … larger population of cases. Term used to represent the population under study.

Population- set of cases from which a sample is drawn and to which a researcher wants to generalize.

Frequency- symbolized by f, this is the number of cases with a particular value of a variable, or values of two or more variables.

Measures of Central Tendency- representative values for a set of scores that are considered averages of univariate information.

Mean- arithmetical average of all scores; the sum of cases divided by the number of cases.

Median- value that divides an ordered set of scores in half.

Mode- most frequently occurring score on a variable.

Measures of dispersion- indices of the spread of a frequency distribution; variation about an average such as the mean or median

Standard deviation- measure of variation in scores. It is also the square root of the variance.

Range- extent of the frequency distribution; the difference between the minimum and maximum value in a frequency distribution

Variance- square of standard deviation; statistical measure of the spread or variation of a group of values in a sample

Standard error- the standard deviation of a sampling distribution.

Descriptive statistics- refers to methods for summarizing information so that information is more intelligible, more useful or can be communicated more effectively.

Inferential statistics- refers to procedures used to generalize from a sample to the larger population and to assess the confidence we have in such generalizing.

Independent variable- variable determining the value of others; the variable in a mathematical statement whose value, when specified, determines the value of another variable or other variables

Dependent variable- an element in a mathematical expression that changes its value according to the value of other elements present

Confounding variable- variable that may be confused for the independent variable; commonly makes researchers fail to distinguish between the independent variable and confounding variable

Sampling- the process of selecting a sample group to be used as the representative or the random…… [read more]
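The descriptive measures defined above can be sketched with Python's standard-library statistics module; the sample scores below are invented purely for illustration.

```python
# A small sketch of the measures defined above, using Python's
# standard-library "statistics" module. The scores are made up.
import math
import statistics

scores = [4, 8, 6, 5, 3, 8, 9, 5, 8]

mean = statistics.mean(scores)          # sum of cases / number of cases
median = statistics.median(scores)      # middle value of the ordered scores
mode = statistics.mode(scores)          # most frequently occurring score
variance = statistics.variance(scores)  # sample variance (spread of scores)
stdev = statistics.stdev(scores)        # square root of the variance
value_range = max(scores) - min(scores) # maximum minus minimum value

# Standard error of the mean: the standard deviation of the
# sampling distribution of the mean.
std_error = stdev / math.sqrt(len(scores))

print(mean, median, mode, value_range)
```

Each line maps directly onto one of the glossary entries; the standard error shrinks as the sample grows, which is the bridge from descriptive to inferential statistics.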

Derivatives Calculus Term Paper

Term Paper  |  2 pages (591 words)
Style: MLA  |  Bibliography Sources: 2


Mathematics: Derivatives

Derivatives: an Explanation

"Derivative" is a mathematical answer to the question, "how quickly does it change?" For instance, if one noted that the national debt was changing rather quickly, one could also say that the national debt had a high derivative. If one specified and went on to say that the national debt was rising rather quickly, one could also say that the national debt had a high, positive derivative. It follows that if the national debt were falling rather quickly (although that is unlikely to happen), one could also say that the national debt had a high, negative derivative.

When working with derivatives, it is important to avoid ambiguity. While most would assume that a high derivative was positive, the word "high" is not mathematically defined. For that reason, a certain vocabulary should be used when working with derivatives to ensure effective communication. The words "high" and "low" should be discarded in favor of well-defined terms like "negative" (below zero) and "positive" (above zero).

Establishing that vocabulary raises the question: what if the derivative is zero? If a derivative is the answer to the question, "how quickly does it change," and the answer is zero, that must mean it didn't change at all. Therefore, if one were to say that the national debt was stable, or not changing, then one could also say that it had a derivative of zero.

Using some basic concepts from algebra, another definition for "derivative" can be reached. A common tool in algebra is a graph, a system that plots points based on their value. Each point has two values, labeled "X" and "Y" respectively, and the point is located "X" units to the right (if positive) or left (if negative) of the origin…… [read more]
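The "how quickly does it change?" idea above can be sketched numerically; the function f(x) = x² and the step size below are illustrative choices, not anything from the essay.

```python
# A numerical sketch of "how quickly does it change?": the derivative of
# f at x is approximated by the slope over a tiny interval around x.
# f(x) = x**2 is an illustrative choice; its exact derivative is 2x.

def f(x):
    return x ** 2

def derivative(f, x, h=1e-6):
    # Central difference: slope of the line through
    # (x - h, f(x - h)) and (x + h, f(x + h)).
    return (f(x + h) - f(x - h)) / (2 * h)

print(derivative(f, 3.0))   # close to 6.0: f is rising here (positive derivative)
print(derivative(f, 0.0))   # close to 0.0: f is momentarily flat at x = 0
```

A positive result means the function is rising at that point, a negative result means it is falling, and zero means it is stable, exactly as described for the national debt.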

William Gosset Term Paper

Term Paper  |  3 pages (960 words)
Bibliography Sources: 1+


William Gosset

William Sealy Gosset was one of the leading statisticians of his time, particularly through his work on the concept of standard deviation in small samples. His theories, which were published under the pseudonym "Student," are still used today in both the study of statistics and its practical application.

Gosset was born on June 13, 1876, in Canterbury, England to Colonel Frederic Gosset and Agnes Sealy Vidal. Gosset was well educated from the beginning first at Winchester, a prestigious private school, then at New College at Oxford. He received his degree in mathematics in 1897, followed two years later by a degree in chemistry (O'Connor and Robertson). It was the combination of these two fields of study that gave Gosset a career and an opportunity to create his theory.

Upon graduation, Gosset was hired as a chemist by the Arthur Guinness and Son Company in Dublin. Working in the brewery required Gosset to constantly attempt to find the best varieties of barley for use in the production of Guinness. This was a complicated procedure of taking small samples to determine the best quality product. Gosset continuously experimented with the results of various samples of barley in order to find ones of the best quality with the highest yields that were capable of adapting to changes in soil and weather conditions. Much of his work was trial and error, both in the laboratory and on the farms, but he also spent time with Karl Pearson, a biometrician, in 1906-07, at his Galton Eugenics Laboratory at University College (O'Connor and Robertson).

Pearson assisted Gosset with the mathematics of the process. Gosset published his findings under the name of "student" because the brewery would not permit him to publish. The brewery feared that trade secrets would get out if information about the brewing process was published. Consequently, Gosset had to assume a pseudonym even though his information would not have impacted the business in the way the brewery was concerned (O'Connor and Robertson).

Gosset published his work in an article called "The Probable Error of a Mean" in a journal operated by Pearson called Biometrika. As a result of Gosset's pseudonym, his contribution to statistics is called the Student t-distribution. Gosset's work caught the attention of Sir Ronald Aylmer Fisher, a statistician and geneticist of the time. Fisher declared that Gosset had developed a "logical revolution" with his findings about small samples and t-distribution (O'Connor and Robertson).

In his work with the barley for the brewery, Gosset was concerned with estimating standard deviation for a small sample. A large sample's standard deviation has a normal distribution. However, Gosset did not have the luxury of working with large samples. He had to find a way to determine the standard deviation for a small sample without having a preliminary sample to make an estimate. Gosset developed the t-test to satisfy this need.…… [read more]
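The small-sample situation described above can be sketched with a one-sample t statistic computed from scratch; the barley "yields" and the hypothesized mean below are invented for illustration, not Gosset's actual data.

```python
# A sketch of the small-sample problem Gosset faced: with only a handful
# of observations, the sample standard deviation is itself uncertain, and
# the t statistic (rather than the normal z) describes the result.
# The "yields" below are invented for illustration.
import math
import statistics

yields = [102.1, 98.4, 100.3, 97.8, 101.5]   # small sample, n = 5
hypothesized_mean = 100.0

n = len(yields)
sample_mean = statistics.mean(yields)
sample_sd = statistics.stdev(yields)   # divides by n - 1, as in Gosset's work

# One-sample t statistic: how far the sample mean falls from the
# hypothesized mean, in units of the estimated standard error.
t = (sample_mean - hypothesized_mean) / (sample_sd / math.sqrt(n))
print(round(t, 3))
```

For small n this statistic follows Gosset's t-distribution, whose heavier tails account for the extra uncertainty in estimating the standard deviation from so few cases.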

How Math Explains the World Term Paper

Term Paper  |  4 pages (1,403 words)
Bibliography Sources: 2


¶ … Math Explains the World

The title of James Stein's book, How Math Explains the World, is, perhaps, a bit deceptive. The reader who is expecting simplified explanations of complex mathematical principles will be disappointed. Although Stein has simplified many concepts, they will still be challenging for the reader who struggled with math in high school or who took… [read more]

Normal Distribution Curve Essay

Essay  |  2 pages (593 words)
Bibliography Sources: 4


This equality is a constituent of the normal curve and something that makes it helpful to psychology in that different measurements can be based on the normal curve and applied to varying situations.

Z scores, for instance, are raw scores that have been converted to units of standard deviation. Because of the nature of the normal distribution, these can be converted to percentiles or to other measurement scales if necessary.

The normal distribution is the shape that happens to occur most often in describing a population, and it is lucky that it does, because it can be precisely quantified by mathematical equations. IQ scores are an ideal example of a normal distribution, where the greatest frequency falls at the middle (the mean) with frequencies tapering off on either side.
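The IQ example above can be sketched with Python's standard-library NormalDist, assuming the conventional IQ scaling of mean 100 and standard deviation 15.

```python
# A sketch of converting a raw IQ score to a z-score and a percentile,
# assuming the conventional IQ scaling (mean 100, standard deviation 15).
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

raw_score = 115
z = (raw_score - iq.mean) / iq.stdev   # raw score in standard-deviation units
percentile = iq.cdf(raw_score)         # fraction of the population below it

print(z)                    # 1.0: one standard deviation above the mean
print(round(percentile, 3))
```

This is the equality the excerpt describes: any raw score on a normal variable can be re-expressed as a z-score and then as a percentile, regardless of the variable being measured.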

Because it occurs often and because the shape is mathematically guaranteed, parametric studies and statistical tools (i.e., those based upon a normal distribution) are often more reliable than non-parametric ones.

The normal distribution, finally, is also important in statistics because, under mild and commonly met conditions, the sum of a large number of random variables is distributed approximately normally (the central limit theorem); it is also a convenient choice for modeling a large variety of commonly encountered random variables.

Moreover, of all distributions, the normal distribution is the only absolutely continuous distribution whose cumulants beyond the first two (the mean and the variance) are all zero.


Casella, G., & Berger, R. (2001). Statistical inference. UK: Duxbury.

Gravetter, F., & Wallnau, L. (2007). Essentials of statistics for the behavioral sciences. USA: Thomson Wadsworth.

Weinbach, R.W., & Grinnell, R.M. (1991).…… [read more]

Correlation and Causation Understanding Essay

Essay  |  3 pages (1,147 words)
Bibliography Sources: 1+


Correlation and Causation

Understanding correlation

Within any population the variables that concern a researcher will hold different values. This difference in value for any variable becomes the basis of different types of analysis, which go beyond simply counting categories of the phenomenon. This type of analysis engages the use of variation to make statements about the nature of the relationship between variables. One of the ways to measure the association between two variables is the use of correlation. Correlation is consequently a useful tool that provides a quantitative measure of the presumed relationship between two or more variables.

Correlation, therefore, is a statistical technique that provides a numerical or quantitative assessment of the degree to which two variables co-vary. The idea of association is tied to the concept of co-variation. Co-variation occurs when two variables change values together. This changing of values is a conceptual association that exists as a consequence of the way in which we try to make sense of the world. Within the mind of the observer it is possible to consider that the presence of x is linked to the presence of y. This linking is a consequence of observing instances of x and seeing instances of y existing within close proximity to x. One may observe that changes in diet may result in the loss or gain of weight. This observation forms the basis of common understandings about the relationship between things. What scientists have attempted to do is to measure the strength of that relationship, thus providing a number that can be compared to other numbers to indicate different features of the observed relationship.

The main way to represent a correlation is to use the correlation coefficient (r). The correlation coefficient is the product of a series of statistical calculations that are produced when either the Pearson Product Moment Correlation or the Spearman Rho is computed. The correlation coefficient ranges in value from -1.0 to +1.0. The larger the size of the correlation coefficient, that is, the closer it tends toward 1 or -1, the stronger the relationship between the variables being tested. Moderate correlations are understood to begin at around 0.6 and weak correlations at around 0.4; these values may be positive or negative. If the correlation coefficient is 0, that suggests there is no relationship between the variables being tested.

The positive and negative signs are very important in interpreting the correlation between two variables. While the number tells the magnitude or size of the correlation, the sign before the number indicates the direction of the correlation. The direction of the correlation can be positive or negative. These directions are also known as a direct correlation and an inverse correlation (Cooper & Schindler 2011). With a direct correlation the values of both variables increase together. Consequently, as the number of calories that an individual ingests increases, their weight may also increase. The relationship just described is a positive correlation, in which as one variable increases the other increases as well. In an inverse correlation… [read more]
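The correlation coefficient described above can be computed from scratch; the calorie and weight figures below are invented, echoing the essay's diet example, and illustrate a direct (positive) correlation.

```python
# A from-scratch sketch of the Pearson product-moment correlation
# coefficient r for two made-up variables (daily calories and weight).
import math

calories = [1800, 2000, 2200, 2500, 2700, 3000]
weight = [60, 63, 65, 70, 72, 78]   # kg, invented for illustration

n = len(calories)
mean_x = sum(calories) / n
mean_y = sum(weight) / n

# r = co-variation of x and y divided by the product of their spreads
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(calories, weight))
den = math.sqrt(sum((x - mean_x) ** 2 for x in calories)
                * sum((y - mean_y) ** 2 for y in weight))
r = num / den
print(round(r, 3))   # close to +1: a strong direct (positive) correlation
```

A negative numerator would flip the sign of r, giving the inverse correlation the excerpt goes on to describe.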

Person Hired a Firm to Build Essay

Essay  |  1 pages (403 words)
Bibliography Sources: 1


¶ … Person hired a firm to build a CB radio tower. The firm charges $100 for labor for the first 10 feet. After that, the cost of the labor for each succeeding 10 feet is $25 more than the preceding 10 feet. That is, the next 10 feet will cost $125; the next 10 feet will cost $150, etc. How much will it cost to build a 90-foot tower?

We see that there is a new price for every ten feet of tower. Each new price is $25 added to the previous price. Since repeated addition is involved, this is an arithmetic sequence. First, we need to identify the following numbers:

n = number of terms; n = 9

d = the common difference; d = 25

a1 = first term; a1 = 100

an = last term; an = a9

We know n = 9 because the tower increases in increments of ten feet and the final height is 90 feet: 90/10 = 9.

To find the nth term of an arithmetic sequence, Page 271 of Mathematics in Our World gives us the following formula:

an = a1 + (n - 1)d

a9 = 100 + (9 - 1)(25)

a9 = 100 + 8(25)

a9 = 100 + 200

a9 = 300…… [read more]
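The arithmetic-sequence calculation above can be sketched in a few lines, including the series sum that answers the original question about the whole 90-foot tower.

```python
# A sketch of the arithmetic-sequence calculation above: the price of the
# ninth 10-foot segment and (answering the original question) the total
# cost of the 90-foot tower.

a1 = 100   # first term: labor for the first 10 feet
d = 25     # common difference: each segment costs $25 more
n = 9      # 90 feet / 10 feet per segment

# nth term of an arithmetic sequence: an = a1 + (n - 1) * d
a9 = a1 + (n - 1) * d
print(a9)      # 300: the ninth 10-foot segment costs $300

# Total cost: sum of the arithmetic series, n * (a1 + an) / 2
total = n * (a1 + a9) // 2
print(total)   # 1800: the whole 90-foot tower costs $1,800
```

The $300 matches the excerpt's ninth term; summing all nine segment prices gives the tower's full labor cost.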

Guillaume Francois Antoine De L'hopital Term Paper

Term Paper  |  5 pages (1,595 words)
Bibliography Sources: 1+


Apparently, out of respect for the mathematician who made much of his fame possible, L'Hopital abandoned the project.

"L'Hopital was a major figure in the early development of the calculus on the continent of Europe" (Robinson 2002). During this time of scientific and mathematic enlightenment in Europe, and particularly in France, L'Hopital established himself as one of the world's premier mathematicians and authors. It is noteworthy that many of the accomplishments L'Hopital is credited with have come into question over the years. Most obvious among these is the rule that is named after him, which every calculus student has been forced to memorize for the past three hundred years. Despite these questions, perhaps the most telling thing about L'Hopital is that he was widely accepted and respected by his peers. He became the third man in continental Europe to learn calculus simply because he impressed the man who later became his tutor. "According to the testimony of his contemporaries, L'Hopital possessed a very attractive personality, being, among other things, modest and generous, two things which were not widespread among the mathematicians of his time" (Robinson 2002). He died on the second of February, 1704, in Paris, the city of his birth.

Works Cited

1. Addison and Wesley. Calculus: Graphical, Numerical, Algebraic. New York: Addison-Wesley Publishing, 1994.

2. Feinberg, Joel and Russ Shafer-Landau. Reason and Responsibility. Boston: Wadsworth Publishing, 1999.

3. Goggin, J., and R. Burkes. Traveling Concepts II: Frame, Meaning and Metaphor. Amsterdam: ASCA Press, 2002.

4. Greenberg, Michael D. Advanced Engineering Mathematics: Second Edition. Delaware: University of Delaware, 1998.

5. O'Connor, J.J., and E.F. Robertson. "Blaise Pascal." JOC/EFR. December 1996. School of Mathematics and Statistics, University of St. Andrews, Scotland.…… [read more]

Probability Statistics Term Paper

Term Paper  |  2 pages (735 words)
Bibliography Sources: 0


Probability: Its Use in Business Statistics

Business, one might say, is an exercise in probability. No one knows exactly what the market will do in the future, not even the most skilled analysts and prognosticators. One can only make educated guesses, and the use of probability models and statistics enables the professional to make such guesses, even though no consumer behaves perfectly according to mathematical economic models. If used correctly, statistical analysis can be an important guide that enables one to pursue intelligent business practices and can function as an aid in the decision-making process, even though the results are only, ultimately, projected 'guesses' as to how the economic environment will evolve, given a variety of variable factors.

Probability, in its most ideal mathematical form, attempts to make use of various concepts to determine what is likely to occur, given a particular set of variable circumstances. One of the most important uses of probability in business is to determine what a particular consumer market's spending habits are likely to be, given a particular set of events. For instance, if the Federal Reserve lowers interest rates yet again, and consumer spending is likely to increase, what is the most desirable course of action, in terms of production of a business that manufactures durable goods, if all other market aspects remain relatively unchanged? Probability theory can also be used to assess what to do if a new and potentially variable competitor advances into a market, pricing comparable goods competitively against one's own product line. What will consumers do, and how will the market behave, given these circumstances?

Probability theory thus deals with what is variable and also with what is unknown in projected circumstances or futures. One must know certain fixed attributes about the circumstances, such as certain fixed production costs, but the use of probability theory allows for the introduction of a set of uncertain or variable factors.

Thus, the use of statistical probability attempts to project a variety of foreseeable futures, so the businessperson can prepare for the possible negative aspects of those futures. These unknowns are represented in equations as variables. Various scenarios can be plugged into these placeholders, represented by variables such as 'x' and 'y' in calculus functions.…… [read more]

Historic Mathematicians Term Paper

Term Paper  |  6 pages (2,172 words)
Bibliography Sources: 1+


Historic Mathematicians

Born on January 29, 1700, Daniel Bernoulli was a famous Swiss mathematician. His father, Johann Bernoulli, was the head of mathematics at Groningen University in the Netherlands. His father planned his future so that Daniel would become a merchant. But Daniel never wanted to become a merchant, as his favorite subject was calculus. His father could not… [read more]

Chaos Theory Has Filtered Down Book Review

Book Review  |  6 pages (1,570 words)
Bibliography Sources: 1+


Chaos Theory has filtered down to the public through such short discussions of the issue as are found in films like Jurassic Park or on television documentaries. The issues are more complex than can be indicated in such media depictions, and two authors who have set out to explain chaos theory more thoroughly, though still in a popular vein, are… [read more]

Scores of First Born and Second Term Paper

Term Paper  |  3 pages (789 words)
Bibliography Sources: 0


¶ … scores of first born and second born children on the Perceptual Aberration test. Ha: There is a significant difference in mean scores of first born and second born children on the Perceptual Aberration test. The alternative hypothesis is two-sided, since the hypotheses are non-directional: we are not predicting the direction of the difference.

The data will be examined using an independent t test. This test is used since the groups in these circumstances are not related. If the samples were correlated, where each individual had two scores under two treatment conditions, the dependent t test would have been used.

The test statistic in this experiment represents the difference between the mean scores of the children tested divided by the standard error of the difference. This result would represent whether there was a significant difference between the means. If so, we would reject the null hypothesis. By using the t test, we are able to determine the ratio of the mean difference in test scores when compared to the error of differences in the means. A large mean difference does not guarantee a large t, hence the use of the standard error of difference.

1d. Following a .05 level of significance, and after calculating the df (92), the critical value needed to reject the null hypothesis is 2.367. Our calculations show that t = .529982. Thus, we fail to reject the null hypothesis because our calculated value (t = .529982) is less extreme than the critical values (2.367 or -2.367).

1e. The observed mean difference in test scores between first (M=17.2563) and second (M=16.14815) born children was not significantly different, t (92) = .529982, p>.05.

t = (M1 - M2) / SQRT (((SS1 + SS2) / df) * ((1/n1) + (1/n2)))

t = 1.14815 / SQRT (((4331.62 + 1497.407) / 92) * ((1/27) + (1/27)))

t = 1.14815 / SQRT (63.3591 * .074074) = 1.14815 / 2.166395 = .529982
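The pooled two-sample t computation above can be reproduced directly from the summary numbers given in the excerpt (mean difference, sums of squares, group sizes, and the stated degrees of freedom).

```python
# A sketch reproducing the excerpt's pooled two-sample t computation
# from its own summary numbers.
import math

mean_diff = 1.14815            # difference in means as used in the excerpt
ss1, ss2 = 4331.62, 1497.407   # sums of squared deviations per group
n1 = n2 = 27                   # group sizes
df = 92                        # degrees of freedom as stated in the excerpt

pooled_variance = (ss1 + ss2) / df
standard_error = math.sqrt(pooled_variance * (1 / n1 + 1 / n2))
t = mean_diff / standard_error
print(round(t, 6))   # matches the excerpt's .529982
```

The t statistic is the mean difference expressed in units of the standard error of the difference, which is why a large raw difference alone does not guarantee a large t.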

2a. H0: There is no difference in the population means of the different scores on recall of words testing following memory techniques. Ha: There is a difference in the population means of the different scores on recall of words testing following memory techniques.

2b. An independent two-samples t test is not appropriate because we have more than two samples: we are comparing the results of three groups of subjects.

2c. The between groups ANOVA test statistic will measure if there is a difference…… [read more]

Kde and Kme Kernel Density Estimation (Kde) Term Paper

Term Paper  |  8 pages (2,601 words)
Bibliography Sources: 1+



Kernel Density Estimation (KDE)

Abstract-- Kernel Density Estimation KDE is also known as the Parzen Window Method, after Emanuel Parzen. Parzen is the pioneer of kernel density estimation. Density estimation entails constructing an estimate based upon observed data, where the underlying probability density function cannot be observed. A kernel in turn is used as a weighting function… [read more]
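The Parzen-window idea described in the abstract can be sketched in a few lines; the observations and the bandwidth below are invented for illustration.

```python
# A minimal sketch of kernel density estimation with a Gaussian kernel:
# the estimated density at x is the average of kernel "bumps" centered
# on the observed data points. Data and bandwidth are illustrative.
import math

def gaussian_kernel(u):
    # Standard normal density, used here as the weighting function
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def kde(x, data, bandwidth):
    # Parzen-window estimate: average of scaled kernels over the sample
    n = len(data)
    return sum(gaussian_kernel((x - xi) / bandwidth) for xi in data) / (n * bandwidth)

observations = [1.2, 1.9, 2.1, 2.8, 3.0, 3.1, 4.5]
density_at_3 = kde(3.0, observations, bandwidth=0.5)
print(round(density_at_3, 4))
```

Points close to x contribute large kernel weights and distant points contribute almost nothing, so the estimate is high where observations cluster, which is exactly the density-estimation problem the paper describes.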

Operations Essay

Essay  |  4 pages (1,335 words)
Bibliography Sources: 3



As the first step, solving an equation requires combining like terms for the two expressions within the equation. In this case, like terms are those containing the same variable or group of variables raised to the same exponent, regardless of their numerical coefficient. The second step is to isolate the terms that contain the variable, which means getting the terms containing that variable on one side of the equation while the other variables and constants are moved to the opposite side.

This is followed by isolating the variable to solve for, which yields its numerical coefficient. When a numerical coefficient of one is obtained after isolating the terms containing the variable, the variable has automatically been isolated. The fourth step for solving an equation is substituting the answer into the original equation in order to ensure that the answer is correct. In this case, substitution is the process of swapping variables with expressions or numbers as part of checking the answer. When solving an equation, and when explaining how to solve one, the most important factor to consider is the variables in the equation. This is primarily because the variables play an important role in determining the accuracy of the process, and because they help determine whether the right or wrong answer will be obtained.
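The steps above can be sketched on a concrete linear equation; the equation 3x + 5 = 2x + 9 is chosen purely for illustration.

```python
# A sketch of the four steps above applied to 3x + 5 = 2x + 9.

# Steps 1 and 2: combine like terms and isolate the x-terms on one side:
#   3x - 2x = 9 - 5   ->   1x = 4
coefficient = 3 - 2
constant = 9 - 5

# Step 3: isolate the variable (divide by its numerical coefficient).
x = constant / coefficient
print(x)   # 4.0

# Step 4: substitute the answer back into the original equation to check.
assert 3 * x + 5 == 2 * x + 9
```

Because the coefficient came out as one after isolating the x-terms, step 3 is trivial here; the substitution check in step 4 confirms the answer.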

Four Steps for Solving a Problem:

In most cases, mathematical problems require established procedures, as well as knowing which procedures to apply and when. Moreover, learning to solve a mathematical problem is largely a matter of knowing what to look for. In order to identify the procedures necessary for solving an equation, an individual needs to become familiar with the problem situation, gather the appropriate information, and identify and apply a strategy appropriately. While there are various steps for solving a problem in mathematics, effective problem solving requires practice (Russell, n.d.).

The first step for solving a problem is looking at the clues through reading the problem carefully and underlining the clue words or phrases. When looking for clues, it may be important to examine if the person has encountered a similar problem in the past and what was done in that situation. The second step is defining the game plan, which involves developing strategies for solving the problem. During this process, the various strategies developed can be tried out in order to identify the effective one.

The third step in the process is to solve the problem using the strategy identified in the second step. The appropriate strategy is normally found by trying out various candidates. The fourth step is reflecting on the solution to examine whether it is plausible, whether it solved the problem appropriately, and whether it answered the question in the language of the problem.

When solving a problem or explaining how to solve a problem,… [read more]

Five Process Standards Term Paper

Term Paper  |  3 pages (1,149 words)
Bibliography Sources: 1


¶ … Standards

Five process standards

Describe the mathematical process standards

Problem solving

Engaging in a task without knowing the solution method in advance is what is referred to as problem solving. Drawing on their knowledge, students are better equipped to find a solution for the problem, and while doing this they will develop a new understanding of mathematics. Students are also able to solve other problems they encounter, both in mathematics and in other life situations, using their problem-solving skills. For example: "I have pennies, dimes, and nickels in my pocket. If I take three coins out of my pocket, how much money could I have taken?" (Mathematics, 2000)

Problem solving involves the application and adaptation of various strategies to assist the student in solving problems.

Reasoning and proof

To gain a better understanding of a wide range of phenomena, a student needs strong mathematical reasoning and proof skills. Thinking and reasoning analytically allows a person to identify structures, patterns, and regularities in symbolic objects and real-world situations. To better understand mathematics, a student needs to be able to reason. A good example is: "Write down your age. Add 5. Multiply the number you just got by 2. Add 10 to this number. Multiply this number by 5. Tell me the result. I can tell you your age." (Mathematics, 2000)

Students are able to better evaluate and develop their own mathematical arguments by employing reasoning and proof.
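The age trick quoted above is itself a small exercise in reasoning; it works because the steps amount to computing 10 × age + 100, as the sketch below shows.

```python
# A sketch of why the age trick above works: the steps amount to
# computing 10 * age + 100, so the age can be read back off the result.

def trick(age):
    n = age       # "Write down your age."
    n = n + 5     # "Add 5."
    n = n * 2     # "Multiply the number you just got by 2."
    n = n + 10    # "Add 10 to this number."
    n = n * 5     # "Multiply this number by 5."
    return n      # algebraically: 5 * (2 * (age + 5) + 10) = 10 * age + 100

def recover_age(result):
    # Undo the algebra: age = (result - 100) / 10
    return (result - 100) // 10

print(trick(12))                # 220
print(recover_age(trick(12)))   # 12
```

Seeing that every run of the trick produces 10 × age + 100 is precisely the kind of argument the reasoning-and-proof standard asks students to construct.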


Communication

For the teaching of mathematics, communication is an integral part. It provides an avenue for students and lecturers to share ideas and make clarifications where necessary. Challenging students to communicate their mathematical results and reasoning helps them learn to justify themselves in front of others, which leads to better mathematical understanding. Working on mathematical problems with others and having discussions allows students to gain more perspectives when solving mathematical problems, e.g.: "There are some rabbits and some hutches. If one rabbit is put in each hutch, one rabbit will be left without a place. If two rabbits are put in each hutch, one hutch will remain empty. How many rabbits and how many hutches are there?" (Mathematics, 2000)
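The rabbits-and-hutches problem quoted above has a tidy solution: writing r for rabbits and h for hutches, the two conditions give r = h + 1 and r = 2(h − 1), and a tiny search finds the pair satisfying both.

```python
# A sketch of the rabbits-and-hutches problem above. The two conditions
# translate to r = h + 1 and r = 2 * (h - 1); a small search finds the
# pair of whole numbers satisfying both.

solution = None
for h in range(1, 100):
    r = h + 1                 # one rabbit per hutch leaves one rabbit over
    if r == 2 * (h - 1):      # two per hutch leaves exactly one hutch empty
        solution = (r, h)

print(solution)   # (4, 3): four rabbits, three hutches
```

Checking: with four rabbits and three hutches, one rabbit per hutch leaves one rabbit without a place, and two rabbits per hutch fills two hutches and leaves one empty.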


Connections

A student's understanding is deepened when they are able to connect mathematical ideas. By continuously teaching students new mathematics that is connected to what they have learned previously, students are able to make connections. Learning mathematics by working on problems that arise outside mathematics should also be incorporated into the curriculum. These connections give students an opportunity to relate what they learn to other subjects or disciplines. Mathematics is connected to many other subjects, and it is very important that students get to experience mathematics in context.


Representation

Proper and easy representation of mathematical ideas helps people to better understand and use those ideas. For example, it is much more difficult to do multiplication using Roman numerals than it is to use Arabic base-ten numerals (Mathematics,… [read more]

Inferential Because it Makes Claims A-Level Coursework

A-Level Coursework  |  3 pages (946 words)
Bibliography Sources: 4


¶ … inferential because it makes claims about the population of adult Americans based on a sample of 9000 persons. The use of a sample proportion to estimate the population proportion makes this study inferential (Gravetter & Wallnau, 2008).

The research question in the study was: does the lack of health care increase the risk of death?

The data were obtained using a survey of 9,000 persons tracked by the U.S. Centers for Disease Control and Prevention. While the article does not explicitly state that the data were collected using a survey, the tracking of individuals could not have been done using an experimental design. Additionally, the purpose of the study suggests that it engaged a correlational approach to explicating the problem.

The exclusion of persons aged 65 and over is an attempt to eliminate bias, as the inclusion of these persons would create a systematized form of error within the study. These older Americans receive health care through Medicare.

5. The conclusions drawn from the article are warranted because they are logical. Firstly, persons who do not access medical attention will die from ailments that require medical attention. Secondly, the design of the study and the sample used are representative of the country, so it is legitimate to generalize from such a sample. Finally, the design of the study followed a similar study done in 1993; therefore there is methodological support for the approach employed.

6. A large trial is necessary to ensure that the sample is representative of the population. Representativeness means that the sample is similar to the population in key characteristics and that there is little difference between the sample proportions and the population proportions. The error in the sample is therefore small.

7. A control group is needed to ensure that non-spuriousness is addressed adequately in the study. Using a control group means the researcher can be confident that the independent variable, and not some other factor, has the stated effect on the dependent variable (Lenth, 2001).

8. The double-blind feature guards against the propensity of human error to seep into the study. When both the participants and the researcher are unaware of which group is the treatment group, other variables that could have an effect on the study are controlled for.

9. The use of volunteers would have biased the results, as it would have introduced systematic error into the study. The generalizability and validity of the study would be called into question (Creswell, 1994). It is only through randomization that random error can be statistically determined.

Chapter 2

A random sample is similar to a convenience sample and a systematic sample only in that each selects members of a population for investigation, and all methods of sampling will contain error. It is different because with a random sample there is…… [read more]

Frequency Distribution Below Shows Research Paper

Research Paper  |  3 pages (870 words)
Bibliography Sources: 3


The frequency distribution below shows the distribution of suspended solids concentration (in ppm) in 50 river water samples collected in September 2011.

Concentration (ppm) [frequency table not reproduced in this preview]


What percentage of the rivers had suspended solid concentration greater than or equal to 70?

Total samples (N) = 50. (7 + 2 + 2)/50 = 0.22, so 22% of the samples have a concentration of 70 ppm or greater.

Calculate the mean of this frequency distribution.

The midpoint of each concentration class is multiplied by its frequency; the results are summed and divided by N (50). Mean = 57.1.
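As an illustration of the midpoint method, here is one frequency table consistent with the figures stated in this preview (N = 50, mean 57.1, 17 observations below the 50-59 class, 11 at or above 70 ppm); the original table is not shown, so these class frequencies are a reconstruction, not the source data:

```python
# One frequency table consistent with the stated figures (a reconstruction,
# not the original data): classes 20-29 through 90-99 ppm, keyed by midpoint.
freqs = {24.5: 2, 34.5: 6, 44.5: 9, 54.5: 10, 64.5: 12, 74.5: 7, 84.5: 2, 94.5: 2}

n = sum(freqs.values())                                   # 50 samples
mean = sum(mid * f for mid, f in freqs.items()) / n       # midpoint method
pct_70_up = sum(f for mid, f in freqs.items() if mid >= 70) / n

print(n, mean, pct_70_up)   # 50 57.1 0.22
```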

In what class interval must the median lie? Explain your answer. (You don't have to find the median)

The median must lie in the 50-59 interval, as this is where the middle data points (the 25th and 26th values) fall in this data set of 50 points: there are 17 points before this class and 23 after, so although it sits toward the lower end, this class contains the median data points.

Assume that the smallest observation in this dataset is 20. Suppose this observation were incorrectly recorded as 2 instead of 20. Will the mean increase, decrease, or remain the same? Will the median increase, decrease or remain the same? Explain.

The mean of the raw data set -- that is, not of the grouped frequency distribution -- would decrease, since the sum of the observations drops by 18 while N stays at 50 (a decrease of 18/50 = 0.36). The mean computed from the frequency distribution would change only very slightly, depending on whether the lowest class is redrawn to include the miscoded point. The median would remain the same, however, as moving the lowest data point even lower does not change the order of the data or the position/identity of the central data point(s).

Refer to the following information for Questions 5 and 6.

A coin is tossed 4 times. Let A be the event that the first toss is heads. Let B be the event that the third toss is heads.

5. What is the probability that the third toss is heads, given that the first toss is heads?

If it is already given that the first toss is heads, there is a 0.5 probability that the third toss will be heads -- the same as for any standard coin toss. Though the probability of both A and B occurring is 0.25 (0.5 × 0.5), knowing that A has already occurred leaves B with its natural, independent probability.
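A minimal enumeration of all 2⁴ equally likely toss sequences confirms both probabilities for the events labeled A and B in the exercise:

```python
from itertools import product

# Enumerate every equally likely sequence of 4 coin tosses.
outcomes = list(product("HT", repeat=4))          # 16 outcomes

a = [o for o in outcomes if o[0] == "H"]          # A: first toss heads
a_and_b = [o for o in a if o[2] == "H"]           # A and B: third also heads

p_b_given_a = len(a_and_b) / len(a)               # conditional probability
p_a_and_b = len(a_and_b) / len(outcomes)          # joint probability
print(p_b_given_a, p_a_and_b)                     # 0.5 0.25
```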

6. Are A and B independent? Why or why not? Each coin toss is independent, as it is not influenced by previous tosses -- a previous heads does not actually change the coin…… [read more]

Create and Analyze a Self-Designed Fictitious Act vs. SAT Scores of Low Income Students Research Paper

Research Paper  |  3 pages (840 words)
Bibliography Sources: 3


Score Stats

A Statistical Analysis of ACT vs. SAT Scores of Low Income Students

Study Description

Apparent income disparities in standardized test scores have been noted in many previous studies, with the determination that the income level of a student's family -- along with other sociocultural factors -- has a major effect on his or her ability to achieve on standardized tests (Kohn, 2002). For tests like the ACT and the SAT, which are commonly (almost universally) used by colleges and universities in the United States as part of their admissions criteria and decision-making process, a gap in performance caused by income levels puts low-income students at a significant disadvantage for entry into four-year degree programs, which in turn limits earning potential and thus could in fact perpetuate lower income levels (Kohn, 2002). This study set out to determine whether there is a significant difference between the ACT test scores of low-income students and the SAT scores of the same student population, as a means of determining whether test composition can mediate, or make more pronounced, the impact of income standing on student performance. Thirty students who completed both the ACT and the SAT tests and who met the income criterion of living at or below 150% of the poverty level were included in the study, with their total scores on both tests compared in order to determine whether a significant difference exists. Such a difference would indicate that something in the test structure(s) influenced the impact that a low-income background has been observed to have on standardized test scores.

Statement of Hypothesis

The null hypothesis is that there will be no difference between the means. The alternative hypothesis, which is the hypothesis this study is investigating, is that there is a significant difference between the mean scores on the ACT and the SAT, indicating that test structure can determine the degree to which income impacts test scores.
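Since the same students took both tests, the comparison described above is a paired one. A minimal sketch of a paired t-test on a handful of hypothetical score pairs (the data below are invented illustrative percentile scores for 8 students, not the study's data for 30; a real analysis would first place ACT and SAT results on a common scale):

```python
import math
from statistics import mean, stdev

# Hypothetical percentile scores for 8 students on both tests (illustrative only).
act = [62, 58, 71, 66, 54, 60, 68, 57]
sat = [60, 55, 70, 62, 52, 57, 66, 54]

diffs = [a - s for a, s in zip(act, sat)]      # paired differences
t = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
df = len(diffs) - 1

# Compare |t| against the two-tailed critical value for the chosen alpha;
# for df = 7 and alpha = .05 that critical value is about 2.365.
print(round(t, 2), df)   # 7.64 7
```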

Variable Description

Income level was one independent variable used to determine eligibility/inclusion for the study, with a family income of 150% of the defined poverty level the upper income limit. Subjects were randomly selected and were also polled for age and gender, though these variables were not analyzed further. Income level was not recorded past the point of inclusion, therefore figures for this data are not given; gender and age are given and a descriptive analysis was performed. Dependent variables of interest were test scores on the ACT and test scores on the SAT. These two sets of variables constitute the data points…… [read more]

Stat Notes Sampling Error and Standard Research Paper

Research Paper  |  2 pages (442 words)
Bibliography Sources: 0


Stat Notes

Sampling error and standard error of the mean (SEM) measure the error in assuming the sample accurately represents the population; the smaller the error, the more closely the sample can be assumed to match the population.

Confidence intervals (CIs) are ranges likely to contain the true population value, governed by the confidence level -- the higher the confidence level (the degree of certainty desired), the wider the confidence interval must be to ensure the true value falls within it.

Null hypothesis is always that there is no effect of an intervention/variable -- that measured groups do not differ significantly on the measured area(s).

Alternative hypothesis states that there is an effect of the measured intervention(s)/variable(s).

Probability sampling is used to obtain a study sample that is representative of the population. Perfect representativeness rarely occurs, even with fully randomized sampling, so reducing sampling error is key to yielding reliable and valid results. Because of the inverse relationship between sample size and the standard error of the mean (SEM), a larger sample size means a smaller error.

Statistical inference refers to both an estimation of parameters such as the mean and other basic summary statistics of a data set/population, and to hypothesis testing.

Interval estimation is an estimated confidence interval.

Hypothesis testing includes the objective means of determining if a null hypothesis should or…… [read more]

Statistics in Criminal Justice Discussion and Results Chapter

Discussion and Results Chapter  |  2 pages (517 words)
Bibliography Sources: 2



Questions like this one make me wonder whether I should give an honest response or the response that I believe is being sought by the question. In all honesty, my Minimal Statistics Baseline (MSB) is zero. I am honestly not committing to any MSB from this point forward; if I happen to be able to live without ever using statistical analysis again, then I will not be taking any action to attempt to incorporate statistics into my life. I do not intend to read newspaper articles with the purpose of understanding the statistics or look up research articles at a library for the purposes of understanding the statistical analysis.

However, while I have no intention of taking steps to have any type of MSB because of an intentional focus on statistics, I am well aware that I need to use and understand statistics to be able to function as an informed adult. Thinking about election season and the apparently at-odds poll results that are always being touted to support different issues, I realize that understanding how that data was obtained and analyzed is critical to being able to understand that information. As a parent, my child will take standardized tests, and I will need to understand basic statistics in order to understand what test scores mean. If I am ill and looking at potential therapies, I will need to understand the relative advantages and disadvantages of different treatment modalities, and understanding those requires understanding statistics. Depending on where my career takes me, I may find myself…… [read more]

SPSS Data Analysis Research Paper

Research Paper  |  3 pages (827 words)
Bibliography Sources: 3


However, to determine the strength of this relationship a Pearson's product-moment correlation coefficient (r) can be calculated for these two variables. Based on the SPSS results, there is a very strong, statistically significant correlation between hours and scores [r (18) = .967, p < .01, two-tailed]. The percentage of the variation in the dependent variable due to the independent variable is also very high (r2 = .934), which suggests that the average number of hours per week studied may be responsible for 93.4% of the final exam grade; however, a correlation cannot determine causality, only that there is a strong association between the two variables.
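A minimal sketch of how r and r² are computed from paired data (the hours/score pairs below are invented for illustration, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hours = [1, 2, 3, 4, 5]          # hypothetical weekly study hours
scores = [52, 55, 61, 58, 65]    # hypothetical final exam scores

r = pearson_r(hours, scores)
r_squared = r ** 2               # proportion of variance explained
print(round(r, 2), round(r_squared, 2))
```

Even on this toy data, squaring r shows how a strong correlation translates into the proportion of score variation associated with study hours.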

There are a number of potential ethical considerations concerning how the data was collected in this study. Of primary concern was the possibility that average hours studied could influence final grades for the course, but the professor collected this data at the end of the semester during the final examination. While there is still a potential ethical concern, the assumption is that the final exam was graded before the data was viewed and analyzed. Even so, most students would expect the hours data and final exam scores to remain confidential during the grading period.


The predictor variable (X) is the independent variable, which in this study was the average hours of study per week. The criterion variable (Y) is the dependent variable, in this case the final exam scores. The general form of the linear regression equation is Y = β0 + β1X1 + β2X2 + … + βnXn + ε, with β0 representing the Y-intercept, β1 through βn representing the slopes, and ε representing the error of prediction. The values of the dependent variable can therefore be estimated by the regression equation Y = 47.918 + 2.619X, with β1 = 2.619, t (18) = 16.014, p < .0001. For example, if a student wanted to know what grade he or she might get by studying 15 hours a week, he or she would substitute 15 for X in this equation and get a predicted final exam score of about 87. The accuracy of this prediction depends on the amount of variation in the data, which is the difference between the best-fit line and the observed values (the residual, Y − Y′ = e) and is summarized by the standard error of the estimate (SEE = 3.842). These calculations allow students and professors to predict how much studying must occur on…… [read more]
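The fitted equation reported above can be applied directly; a minimal sketch using the intercept and slope given in the text:

```python
# Predicted final exam score from the reported regression fit:
# Y = 47.918 + 2.619 * hours (coefficients taken from the text).

def predict_score(hours):
    return 47.918 + 2.619 * hours

print(round(predict_score(15), 1))   # 87.2, matching the predicted score of 87
```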

Normal Distribution Central Limit Theorem and Point Estimate and an Interval Term Paper

Term Paper  |  3 pages (918 words)
Style: APA  |  Bibliography Sources: 3


Normal distribution is very much what it sounds like. This distribution is symmetrical and is shaped like a bell when graphed on the Cartesian plane. In a normal distribution the mean, the median and the mode are all located at basically the same place on the distribution. This occurs at the peak, and the frequencies gradually decrease at both ends of this bell-shaped curve.

Unfortunately, this is simply one model for looking at a problem, and no definite predictions can be made with this or any other statistical tool; however, the model does have real practical value. Many things in life are approximately normally distributed, offering at least a guide to understanding and predicting behavior mathematically using statistics.

Suppose X is normal with mean μ and variance σ². Any probability involving X can be computed by converting to the z-score, where Z = (X − μ)/σ. E.g.: if the mean IQ score for all test-takers is 100 and the standard deviation is 10, the z-score of someone with a raw IQ score of 127 is z = (127 − 100)/10 = 2.7. The z-score defined above measures how many standard deviations X is from its mean, and is the most appropriate way to express distances from the mean. For example, being 27 points above the mean is impressive if the standard deviation is 10, but not so great if the standard deviation is 20 (z = 2.7 vs. z = 1.35).

Question 2

The central limit theorem states that the distribution of the sum of a large number of independent, identically distributed variables will be approximately normal, regardless of the underlying distribution. The importance of the central limit theorem is very widespread as it is the reason that many statistical procedures work. Regardless of the population distribution model, as the sample size increases, the sample mean tends to be normally distributed around the population mean, and its standard deviation shrinks as n increases.

To use the central limit theorem, the samples must be independent and large enough that a decent amount of data can be gathered to utilize this statistical tool. When taking samples, each one should represent a random sample from the population or follow the population distribution. The sample size should also be less than ten percent of the entire population.
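A small simulation illustrates both claims: sample means drawn from a decidedly non-normal (uniform) population cluster around the population mean, and their spread shrinks as n grows (the sample sizes and repetition count below are arbitrary):

```python
import random
from statistics import mean, stdev

random.seed(0)

def sample_means(n, reps=2000):
    """Means of `reps` samples of size n from a uniform(0, 1) population."""
    return [mean(random.random() for _ in range(n)) for _ in range(reps)]

small, large = sample_means(5), sample_means(30)

# Both sets of means center on the population mean of 0.5...
print(round(mean(large), 2))
# ...but the spread of the means shrinks roughly as 1/sqrt(n).
print(stdev(small) > stdev(large))   # True
```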

Simple random sampling refers to a sampling method in which, from a population of N objects, a sample of n objects is drawn such that all possible samples of n objects are equally likely to occur. This method allows researchers to use standard statistical techniques to analyze sample results. Confidence intervals that extend around the sample mean are created to help model the situation.…… [read more]

Math Anxiety Term Paper

Term Paper  |  3 pages (1,080 words)
Bibliography Sources: 1


Given that this is the case, it is shown that performance in the math class necessarily means that there is greater pressure on the student to be correct than there is in other subjects of academic discourse.

Extensive research has been conducted into the topic of math anxiety along both psychological and physiological avenues. Researchers assert that "Math anxiety can bring about widespread, intergenerational discomfort with the subject, which could lead to anything from fewer students pursuing math and science careers to less public interest in financial markets" (Sparks, 2011, p. 1). This is a very interesting perspective. If these findings are accurate, then the anxiety an individual feels might be impacted not only by his or her own history with math, but also by the experience that parents or guardians had as well. Thinking about the issue, this actually makes a lot of sense. When a child does not understand his or her homework, the child will go to a trusted adult for help with the material. If that adult also does not understand the material, or reacts negatively to the topic, then that will influence the child, providing the youngster with another example of a person who responds to mathematics in the same way. This can be damaging to the relationship between the child and math at a potentially exponential rate.

During the interview, the math instructor I talked with gave me their opinion about what might be the basis for math anxiety. They believe that math anxiety is largely caused by lack of confidence. If a person has been unsuccessful with math throughout their childhood, then they will more than likely have negative opinions about their abilities in the subject once they have reached adulthood. Building of self-confidence, the instructor asserts, will help with the anxiety we feel when we are dealing with mathematics. There are ways in which this confidence can be rebuilt, such as reviewing of knowledge that a person already has, building confidence that they do in fact have mathematical knowledge. Another way is by seeking out help from teachers and classmates. Admitting that you are struggling in math is the first step to gaining the knowledge you need to be successful in the subject.

Many people experience math anxiety, and it seems to be a symptom of a greater truth: anxiety will ultimately beget further anxiety. When a person struggles with something and continues to deal with the issue without overcoming those early struggles, the problem becomes exacerbated. Not liking math or not succeeding in math as a young person becomes something of a self-fulfilling prophecy. If a child fails at math, then he or she goes into the next examination or the next math class fully expecting to fail again. They become so consumed with this idea that it winds up coming to pass. I did not understand the all-consuming nature of math anxiety, but it seems that it could strike anyone… [read more]

Behavior Science Research a Researcher Research Paper

Research Paper  |  2 pages (860 words)
Bibliography Sources: 1


An ordinal measure would ask the 130 individuals if they bought:

One to three vegetables each week

Four to five vegetables each week

More than five vegetables each week

A scale measure would ask the 130 individuals to rate on a scale of one to five (with 1 representing no vegetables and 5 representing many vegetables) how many vegetables they purchased each week.

7. In the fall of 2008, the U.S. stock market plummeted several times, which meant grave consequences for the world economy. A researcher might assess the economic effects this situation had by seeing how much money people saved in 2008. Those amounts could be compared to how much money people saved in more economically stable years. How might you calculate (or operationalize) economic implications at a national level?

The researcher would examine the amount of money people saved in 2008, compare it to the amounts saved in other, more economically stable years, and report a national average amount saved per year for comparison.

8. A researcher might be interested in evaluating how the physical and emotional distance a person had from Manhattan at the time of the 9/11 terrorist attacks relates to the accuracy of memory for the event. Identify the independent variables and the dependent variable.

Physical distance and emotional distance are the independent variables, and the accuracy of memory for the 9/11 terrorist attacks is the dependent variable.

9. Referencing Exercise 8, imagine that physical distance is assessed as within 100 miles, or 100 miles or farther; also, imagine that emotional distance is assessed as knowing no one who was affected, knowing people who were affected but lived, and knowing someone who died in the events. How many levels do the independent variables have? Physical distance has two levels, and emotional distance has three levels.

10. A study of the effects of skin tone (light, medium, and dark) on the severity of facial wrinkles in middle age might be of interest to cosmetic surgeons.

a. What is the independent variable in the study?

The independent variable in this study is skin tone.

b. What is the dependent variable in the study?

The dependent variable in this study is the severity of facial wrinkles.

c. How many levels does the independent variable have?

The independent variable has three levels: light, medium, and dark skin tone.

11. Referring to Exercise 10, what might be the purpose of an outlier analysis in this case? What…… [read more]

Z Test in Psychology Term Paper

Term Paper  |  2 pages (432 words)
Bibliography Sources: 0


Entering the provided values gives: z = (75 − 70)/√[(12/√36) + (12/√36)] = 5/2 = 2.5.

Step 4: Probability Calculation

What is being tested is whether the students' attitudes towards the mentally ill change as a result of viewing the film, thus it does not matter if the students' attitudes are better or worse, only if they changed significantly from students who did not view the film. Since the attitudes could be worse or better, this is a two-tailed test. If the Z score falls within 95% of control scores, then we have to retain the null hypothesis and reject the alternative hypothesis.

If the scores fall outside of 95% of control scores, within the 2.5% of the extreme tail of the normal distribution at either end (two-tailed), then we have to reject the null hypothesis and retain the alternative hypothesis. Based on the Z score table, a Z score of 1.96 or greater would be needed to obtain a probability value below the alpha of 0.05, two-tailed.
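The tail probability behind that 1.96 cutoff can be checked numerically. A minimal sketch using the identity 2·(1 − Φ(z)) = erfc(z/√2) for the standard normal distribution:

```python
import math

def p_two_tailed(z):
    """Two-tailed p-value for a standard normal test statistic."""
    return math.erfc(z / math.sqrt(2))

print(round(p_two_tailed(1.96), 3))   # 0.05, the alpha = .05 cutoff
print(round(p_two_tailed(2.5), 4))    # 0.0124, well below alpha
```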

Step 5: Conclusions

The Z score obtained by comparing the two means was 2.5. For these two means to be significantly different, using an alpha of 0.05, the Z score would have had to be 1.96 or greater. Therefore, since 2.5 >…… [read more]

Guess and Check Essay

Essay  |  2 pages (615 words)
Bibliography Sources: 1


A popular problem solving strategy that an increasing number of students encounter before middle school is model drawing, sometimes taught as "Singapore Math." The Singapore method, named for the Asian nation in which it was developed, teaches students how to create visuals in a systematic way to assist in solving word problems. When students learn this method and have sufficient opportunities to practice, it can go a long way toward preparing them for the guess and check strategy. The Singapore method asks students to really examine the relationship between values in a problem and carefully consider the question being asked with respect to the solution.

There is a greater language base in today's mathematics programs. Prospective teachers should be reminded to discuss problem solving with their students, emphasizing the process and eschewing exclusive focus on "the right answer." Obviously, solving problems correctly is the goal. Students cannot get credit for wrong answers on standardized tests. More importantly, if students do not understand why they got a wrong answer, they have little hope of solving similar problems successfully in the future. Guess and check enables students to thoughtfully work through problems and gain understanding of mathematical relationships. It is a strategy that can help foster success. Success tends to beget further success; students who are able to solve problems build confidence in their ability to do so. They feel good about themselves and good about their mathematics classes. They learn that math does not have to be intimidating. There are thoughtful, logical ways to approach problem solving. It is a good lesson for mathematics as well as for other academic content areas.


Guerrero, S.M. (2010). The value of guess and check. Mathematics Teaching in the Middle School…… [read more]

Nursing Research Analyzing Qualitative Data Essay

Essay  |  3 pages (842 words)
Bibliography Sources: 3


Statistics and Quantitative Analysis Design

Inferential statistics are based on the laws of probability and allow inferences to be drawn about a population based on a sampling of that population. Three applications of inferential statistics are the sampling distribution of the mean, estimating parameters, and testing hypotheses. The sampling distribution of the mean is the theoretical distribution of the means of an infinite number of samples drawn from a selected population. Estimating parameters consists of defining and establishing a framework for the target population from statistical samples (Polit & Beck, 2008, pp. 583-584). Finally, hypotheses are tested with objective criteria provided by data to infer whether the hypotheses are sufficiently supported by the evidence (Polit & Beck, 2008, p. 587).

Multivariate statistics is an area of statistics concerned with the collection, analysis and interpretation of several statistical variables at once. While statistics may be artificially confined for convenience's sake, health care actually involves complex relationships of variables for patients themselves, within a single health care institution, within a group of health care institutions, and within the entire health care system. Multivariate statistics observes and analyzes several of these variables at once using several types of tests for various purposes.

Multivariate statistical analysis is integrated into quantitative analysis through a number of tests that compare several variables in complex relationships. Tests used in multivariate statistics include: multiple regression/correlation tests, used to understand the effects of at least two independent variables on one continuous dependent variable (Polit & Beck, 2008, p. 614); analysis of covariance (ANCOVA), which compares the means of at least two groups with a single central question (Polit & Beck, 2008, p. 624); multivariate analysis of covariance (MANCOVA), which involves controlling covariates -- or extraneous variables -- when the analysis involves at least two dependent variables (Polit & Beck, 2008, p. 627); discriminant function analysis, which involves using a known group to predict an unknown group with independent variables (Polit & Beck, 2008, p. 628); canonical correlation, which involves testing one or more relationships between two sets of variables (Polit & Beck, 2008, p. 638); and logistic regression, which predicts the probability of an outcome based on an odds ratio (Polit & Beck, 2008, p. 640).
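As an illustration of the first of these tests, here is a minimal multiple-regression sketch: two independent variables predicting one continuous dependent variable, fitted by solving the normal equations directly. The noise-free data and helper names are invented for the example; a real analysis would use a statistics package such as SPSS or SAS.

```python
def solve(M, v):
    """Solve the linear system M x = v by Gaussian elimination."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Invented data generated from y = 1 + 2*x1 + 3*x2 (no noise).
x1 = [0, 1, 2, 3, 1]
x2 = [0, 0, 1, 2, 3]
y = [1, 3, 8, 13, 12]

# Design matrix rows [1, x1, x2]; fit via the normal equations XtX b = Xty.
X = [[1, a, b] for a, b in zip(x1, x2)]
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]

intercept, b1, b2 = solve(XtX, Xty)
print(round(intercept), round(b1), round(b2))   # recovers 1 2 3
```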

Inferential Statistics assists in… [read more]

Patient Perceptions of Maternal HIV Case Study

Case Study  |  3 pages (771 words)
Bibliography Sources: 4


For each patient in this study X and Y were known, but the researchers wanted to establish a straight line through the data that minimizes the sum of the squares of the vertical distances, on a graph, of the various points from the fitted line.

Study bias. Participating patients self-selected to complete surveys, and not all survey respondents may have understood the terminology used in the survey in the same way.

Summary of Table 4. Patients' recollections of their physicians' practices are shown in Table 4, along with the physicians' responses. Physician responses reflect their practice standards for recommending testing to women exhibiting certain attributes or life situations, as well as two specific questions that the physicians ask their patients. Patients' responses to the various questions are shown categorically for pregnant and non-pregnant women.

Chi-Square Test -- Race and Recall. Race was not found to be a strong predictor, but a test did indicate that a patient's race is associated with her report that she had an HIV test. White, non-Hispanic and Asian women were significantly less likely to report having been tested for HIV than were African-American or Hispanic women. In the notation χ²(3) = 17.3, χ² represents the chi-square statistic, the "3" stands for degrees of freedom, and 17.3 is the chi-square value. The p value is a measure of how much evidence there is against the null hypothesis: it indicates the probability of getting a result as extreme as the one obtained if the null hypothesis (of no differences between the groups) were true. A small p value (here p < 0.01) indicates that the null hypothesis can be rejected, with the understanding that there is still a possibility of making an error. Chi-square is a non-parametric test, and though it can indicate that two variables are associated, it cannot tell the nature of the association.
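The reported result can be sanity-checked by converting the chi-square value to a p value. For 3 degrees of freedom the chi-square survival function has a closed form, so no statistics library is needed (the formula below is valid only for df = 3):

```python
import math

def chi2_sf_df3(x):
    """P(chi-square with 3 df >= x); closed form valid for df = 3 only."""
    return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

p = chi2_sf_df3(17.3)
print(p < 0.01)   # True: consistent with the reported p < 0.01
```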

Limiting factor. The researchers noted that physicians were used to distribute, collect, and return the patient surveys to the principal investigators and, as such, there was no way to introduce random selection of the patient sample. Also, the patients in the study were associated with only 68 physicians, so generalization may be…… [read more]

Structured Analysis of an Experimental Research Paper

Research Paper  |  3 pages (1,207 words)
Bibliography Sources: 1


When using sample variances to estimate the overall variance of a population, it is very important to avoid biasing the estimate by dividing by (n − 1) rather than the actual sample size n in the variance formula. Without this correction, the computed sample variance would systematically underestimate the population variance.
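A tiny exhaustive example illustrates the bias. For the three-value population below, averaging the naive (divide-by-n) variance over every possible sample of size 2 understates the true variance, while dividing by (n − 1) recovers it exactly; the toy population is invented for illustration:

```python
from itertools import product

population = [1, 2, 3]
mu = sum(population) / len(population)
true_var = sum((x - mu) ** 2 for x in population) / len(population)   # 2/3

def var(sample, ddof):
    m = sum(sample) / len(sample)
    return sum((x - m) ** 2 for x in sample) / (len(sample) - ddof)

samples = list(product(population, repeat=2))               # all samples of size 2
biased = sum(var(s, 0) for s in samples) / len(samples)     # divide by n
unbiased = sum(var(s, 1) for s in samples) / len(samples)   # divide by n - 1

print(round(biased, 4), round(unbiased, 4), round(true_var, 4))
```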

In the 2004 study by Buller et al., the dispersion measures of variance and standard deviation were not of primary interest to the researchers in themselves; however, the confidence intervals for their calculated results were paramount. Computing valid confidence intervals (CIs) relies first upon establishing that the data are normally distributed, and second upon having available the mean and standard deviation needed to compute the CI. Therefore, the internal computations of mean and standard deviation from the large sample size were key to the results of this study. The range parameter was of incidental interest to the researchers, and was implied by the bounds of the categorical ranges they defined for each of their various tests. As noted by the researchers, "the large sample size allowed outcome assessment in patients with a broad range of body weights and renal function." 1

A standard normal distribution is a formal construct, defined as a normal distribution having a mean of zero (0) and a standard deviation of one (1). The area under the standard normal distribution curve represents the proportion of observations in the sample being analyzed, with each observation's position measured as its distance from the mean (the center line of the graph), expressed as a positive or negative number of standard deviations. If a sample is observed to have a normal distribution, it will have characteristics similar to the standard normal distribution, and it therefore becomes possible to use familiar tools to compute the probabilities of selected outcomes, or the proportions falling within given value ranges.
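Because the standard normal CDF has a closed form via the error function, the familiar proportions can be computed directly; a sketch using only the standard library:

```python
import math

def standard_normal_cdf(z):
    """P(Z <= z) for the standard normal distribution (mean 0, sd 1)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Proportion of observations within one standard deviation of the mean:
within_one_sd = standard_normal_cdf(1) - standard_normal_cdf(-1)
print(f"{within_one_sd:.3f}")   # roughly 0.683, the familiar 68% rule
```
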

In the 2004 study by Buller et al., the majority of the data gathered were categorical in nature, and were used to classify trial results as either significant or not significant for each of a large number of specific symptomatic tests. The essence of the comparisons relied upon the techniques of hypothesis testing and confidence intervals to validate whether the effect of each drug was significant for each of the symptomatic tests, and then upon computing the relative significance to compare the performance of the drugs. The experimental result data were entered into a statistical analysis tool (SAS), which established the necessary preliminary criterion that the data conformed to a normal distribution, enabling the researchers to employ the standard statistical tools.

The 2004 study by Buller et al. demonstrates the characteristics of a well-designed and appropriate statistical analysis. The researchers made a conscious effort to use very large sample sizes for each of the medication trials (n 1100), and they established a standard method of hypothesis testing with 95% confidence intervals… [read more]

Statistics in Social Work Research Paper

Research Paper  |  4 pages (1,453 words)
Bibliography Sources: 5


Back-end testing of additional questions, utilizing data from the sample and the chosen test type, performs an 'audit' of sorts on the research. While not typically necessary, since outlandish findings are invariably obvious to professionals who have been working in practice, in training this applied method of assessment is perhaps the best way to learn how to form a strong hypothesis.

b.… [read more]

Theory on Plate Tectonics Term Paper

Term Paper  |  3 pages (1,158 words)
Bibliography Sources: 1+


Tragedy was to strike again, only a year after he took up this post: in 1808 his father died, and in 1809 his wife died in childbirth, with the second son, to whom she was giving birth, also dying soon after. However, his work does not appear to have suffered in the long term, although in the short term he took time off work and devoted himself to his three children (Schaaf, 1964).

In 1810 he remarried, and there were another three children, but this is generally thought to have been a marriage of convenience rather than a love match (Schaaf, 1964).

Some of his major works concerned how to calculate the orbits of the planets. In his work Theoria Motus Corporum Coelestium he examined and discussed the use of differential equations, conic sections and elliptic orbits, and in the next volume of this work he showed how the orbit of a planet could be estimated and the estimate then further refined (Rassias, 1991). By 1817 he had made his contributions to astronomy, and despite continuing observations he did not add more to the theoretical framework of astronomy (Schaaf, 1964).

Gauss did look to other subjects, publishing a total of one hundred and fifty papers over his career and contributing to many other areas. These papers included Methodus nova integralium valores per approximationem inveniendi, a practical essay concerning approximate integration; a discussion of statistical estimators in Bestimmung der Genauigkeit der Beobachtungen; and geodesic problems in Theoria attractionis corporum sphaeroidicorum ellipticorum homogeneorum methodus nova tractata (Schaaf, 1964).

During the 1820s the work of Gauss started taking him more in the direction of geodesy. This may have begun when, in 1818, he was requested to undertake a geodesic survey of Hanover, to link up with the Danish grid that was already in existence. He took total charge, making the measurements during the day and reducing them to calculations in the evenings. It was during this survey, and as a result of its needs, that he invented the heliotrope (Rassias, 1991). Unfortunately, erroneous base lines were used in the survey (Rassias, 1991).

Other work included many theories that were also discovered independently of Gauss by other mathematicians, who have gained the recognition for them. For example, he had formed the ideas of non-Euclidean geometry, claiming to have discovered it fifty-four years before Lobachevsky, whose work he nevertheless praised. The fifty-four-year timeframe may not be correct, but there are certainly some vague references to it in some of his work (Schaaf, 1964).

It was in 1832 that Gauss started to work with Weber on terrestrial magnetism; many ideas were advanced, and Dirichlet's principle was also included, though without a proof. In the Allgemeine Theorie they also proved that there could only be two poles (Schaaf, 1964).

The papers and theories have outlasted the name and reputation of their founder. However, the long-term impact of… [read more]

Sine, Cosine, and Tangent Term Paper

Term Paper  |  4 pages (1,135 words)
Bibliography Sources: 1+


Because of trigonometry, it was now possible to determine the approximate volume of a star simply by finding its diameter. When it was first discovered, people used simple right-angle trigonometry to find heights of mountains and tall buildings.

It was soon discovered that the entire wave spectrum could be described in terms of frequency and amplitude, and graphed by trigonometric functions, such as sine, cosine and tangent.

The Babylonian measure of 360° gave rise to the study of chords, from which sine and cosine were loosely defined. The Greek mathematician Menelaus wrote six books on chords. Ptolemy subsequently created a complete chord table. His new discoveries included a variety of theorems, such as: a quadrilateral inscribed in a circle has the property that the product of its diagonals equals the sum of the products of its opposite sides; the half-angle theorem; the sum and difference formulae; the inverse trigonometric functions; and more sine and cosine rules.

How Sine, Cosine and Tangent are Used Today

Today, sine, cosine and tangent are still used for astronomy and geography, as well as in navigation and mapmaking. The trio is also used in physics, in the study of visible light and fluid motion. Engineers today use trigonometric functions in fields ranging from military engineering to conveyor design.

Trigonometric functions are the functions of an angle. These functions are important when studying triangles and modeling periodic phenomena. The trigonometric functions may be accurately defined as ratios of two sides of a right triangle containing the angle, or as ratios of coordinates of points on the unit circle.

Of the six trigonometric functions, sine, cosine and tangent are the most important. Sine, cosine, and tangent are used when you know an angle and the length of one of the sides of a right triangle, and you want to know the length of another side. For these functions, the angle is given in radians, not degrees.

The sine of an angle is the ratio of the length of the opposite side to the length of the hypotenuse. (Moyer) The cosine of an angle is the ratio of the length of the adjacent side to the length of the hypotenuse. The tangent of an angle is the ratio of the length of the opposite side to the length of the adjacent side.
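The three ratios just defined can be checked numerically on the classic 3-4-5 right triangle; an illustrative sketch (Python's `math` module works in radians, as noted above):

```python
import math

# Right triangle with legs 3 (opposite) and 4 (adjacent), hypotenuse 5.
opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0
angle = math.atan2(opposite, adjacent)   # the angle, in radians

assert math.isclose(math.sin(angle), opposite / hypotenuse)  # sin = opp/hyp
assert math.isclose(math.cos(angle), adjacent / hypotenuse)  # cos = adj/hyp
assert math.isclose(math.tan(angle), opposite / adjacent)    # tan = opp/adj

# Recovering an unknown side: the angle and hypotenuse give the opposite leg.
print(round(math.sin(angle) * hypotenuse, 6))   # 3.0
```
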

Without sine, cosine and tangent, the mathematical tables on our computer screens would only show blank pages, and scientific calculators would not react to punching in numbers. Draftsmen would make serious errors when designing buildings, geologists would have inaccuracies of measurement, and so on.

Trigonometry has even been used in analyzing motor vehicle collisions. (Kaye) Geometry is used to determine curve radii for use in circular motion calculations while sine, cosine and tangent are used in momentum, vaults and road grade determinations.

Trigonometric functions were originally developed for astronomy and geography, but scientists are now using them for other purposes, too. Besides other fields of mathematics, trigonometry is used in physics, engineering, and chemistry.

Within mathematics, trigonometry is used primarily… [read more]

Low Math Term Paper

Term Paper  |  8 pages (2,870 words)
Bibliography Sources: 1+


In the book, Ma provides an example of a Chinese teacher who has this profound understanding.

This teacher prepares for their lesson by considering what they will teach and what it means. They link the lesson that will be taught to the underlying concepts they want the students to learn, to the other concepts the information should link to, and… [read more]

Proof, a Nova Episode Aired Term Paper

Term Paper  |  3 pages (1,088 words)
Bibliography Sources: 1


This is another way of looking at solving complex problems. The show made the problem seem all encompassing (which it was to Wiles), and used a variety of experts to explain just what Wiles was attempting to prove, and why it was so important to the mathematical community. They took a topic which could have been boring and nearly incomprehensible, and made it interesting enough to keep the viewer watching. In fact, NOVA managed to get the viewer behind Wiles, and by the end of the show, when it seemed like he might not prove his theory, it was almost as if I was rooting for him to continue and not give up. To end the program, NOVA said, "Andrew Wiles is probably one of the few people on earth who had the audacity to dream that you could actually go and prove this conjecture" (NOVA). Therefore, this story is as much about dreams and goals as it is about pursuing something complex throughout your life to fruition. Andrew Wiles dared to dream, and in the end, his most complex "proof" may have been that sometimes dreams come true - with hard work, determination, and thinking "outside the box," - or in this case, the theorem.

This video is also quite important in what it shows about how people learn to do mathematics, and it was somewhat how I learned to do mathematics. Wiles broke down an extremely complex problem into bits and pieces, but he also had to look at it in unaccepted and untried ways. This is often how new truths are learned in any area. He also said that he suddenly had some kind of understanding that had not been there before. "I had this incredible revelation. [...] It was the most -- the most important moment of my working life. It was so indescribably beautiful; it was so simple and so elegant, and I just stared in disbelief for twenty minutes" (NOVA). While I have not attempted to solve complex problems such as Wiles', I had a hard time "getting" algebra at first, and it seemed like it took me years and years of study to understand even the most simple equation. Then suddenly, one day in class, I looked at an equation, and it suddenly just "made sense," and I could see the solution without struggle. I finally "got" it, and I know just how Wiles felt when the solution suddenly came to him. It was an incredible feeling, and once I had "gotten" it, not only was mathematics simpler, it was not so frightening or frustrating.

"The Proof" is an elegant look at a complex subject, and it not only made mathematics more human, it made it clear how the best problem-solving approach is one that takes a complex problem, breaks it down into more solvable areas, and then looks at every angle of the problem to find a solution. That solution might be, in the end, simple, but it needed alternate thinking… [read more]

Pascal's Triangle Who Really Invented Term Paper

Term Paper  |  4 pages (1,265 words)
Bibliography Sources: 1+


In fact, the understanding of probabilities that the triangle gave mathematicians has led to the development of "average gain" or "probable gain" formulas that are still used extensively in business and industry (Borel, 1963, p. 20).

The basic formula for the triangle is simple, as one expert notes.

If we assume a fictitious row of noughts prolonging each of these lines to right and left, it is possible to lay down the following rule: each number in any one of these lines is equal to the sum of whatever number lies immediately above it in the preceding line, and whatever number lies immediately to the left of that number. Thus the third number in the fifth line is 10 = 6 + 4; the fourth number in this same line is 10 = 4 + 6; the fifth number is 5 = 1 + 4 (Borel, 1963, p. 18).

There is one problem with Pascal's formula, however. Unfortunately, as the numbers increase, the triangle takes much longer to compute, and the formula becomes ungainly. This created problems with the formula initially, but mathematicians have learned to cope with it and have created alternates that let them work with the numbers more effectively, as this expert notes: "Mathematicians have established certain formulas that allow them to work out the numbers which appear in Pascal's Triangle, as well as the sums of whole rows of these numbers included between fixed limits" (Borel, 1963, p. 18). Thus, Pascal's triangular theory was not perfect, but the formula has lasted through time, been improved, and still makes the study of probabilities approachable.
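Borel's padding rule quoted above translates directly into code; the sketch below uses it to rebuild the rows he cites (the line containing 1, 5, 10, 10, 5, 1 is the one he calls the fifth line):

```python
def pascal_row(previous):
    """Next row: each entry is the sum of the two entries above it
    (a fictitious 0 pads each end, as Borel's rule describes)."""
    padded = [0] + previous + [0]
    return [padded[i] + padded[i + 1] for i in range(len(padded) - 1)]

row = [1]
triangle = [row]
for _ in range(5):
    row = pascal_row(row)
    triangle.append(row)

# Borel's cited line: its third entry is 10 = 6 + 4 from the row above.
print(triangle[5])   # [1, 5, 10, 10, 5, 1]
```
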

However, this simple formula has made quite a difference in mathematics circles for centuries for a number of reasons. First, his treatise on these binomial coefficients later helped contribute to Sir Isaac Newton's eventual invention of the general binomial theorem for fractional and negative powers. In addition, Pascal carried on a long correspondence with Pierre de Fermat, and in 1654, this correspondence helped contribute to the development of the foundation of the theory of probability, which is one of our most important mathematical developments even today.

Interestingly enough, Pascal devoted the last eight years of his short life to philosophy and religion, and gave up his studies in the sciences and mathematics. One must wonder what he could have accomplished had he continued his studies, and indeed, what improvements he could have made to his triangle had he given it even more time and effort. His discoveries and inventions live on today, along with his name, as one of the greatest minds of all time, and he contributed greatly to our lives today, from a clearer understanding of probabilities to measuring the weather, dispensing medications, and ultimately computing our calculations quickly and efficiently.

In conclusion, Blaise Pascal died in 1662 at the age of thirty-nine - two years before the significance of his triangle would be known to those outside his academic circle, and the final formula would be published. Today, mathematicians… [read more]

Mathematician - Maria Gaetana Agnesi Term Paper

Term Paper  |  2 pages (587 words)
Bibliography Sources: 1+


She wrote a two-volume mathematical work, the Institutioni analytiche ad uso della gioventu italiana (Analytical Institutions), that covers elementary and advanced mathematics and that she began to develop when she was teaching mathematics to her younger brothers. Her books aim to present a complete course in algebra and mathematical analysis.

Maria Gaetana Agnesi was well-known for her "The Witch of Agnesi," which, actually, should be called "The Curve of Agnesi." The Italian term "versiera," or plane curve, was mistakenly translated by John Colson into the word "witch" (Parente, 2003). Thus, "The Curve of Agnesi" was also known as "The Witch of Agnesi." Elif Unlu describes "The Witch of Agnesi" by stating the following.

Agnesi wrote the equation of this curve in the form y = a*sqrt (a*x-x*x)/x because she considered the x-axis to be the vertical axis and the y-axis to be the horizontal axis [Kennedy]. Reference frames today use x horizontal and y vertical, so the modern form of the curve is given by the Cartesian equation y*x^2=a^2(a-y) or y = a^3/(x^2 + a^2). It is a versed sine curve, originally studied by Fermat.
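The modern Cartesian form quoted above, y = a^3/(x^2 + a^2), is easy to evaluate numerically; a small sketch, with a = 2 chosen purely for illustration:

```python
def witch_of_agnesi(x, a=2.0):
    """Modern Cartesian form of Agnesi's curve: y = a^3 / (x^2 + a^2)."""
    return a ** 3 / (x ** 2 + a ** 2)

# The curve peaks at x = 0 with height a, and falls off symmetrically.
assert witch_of_agnesi(0) == 2.0
assert witch_of_agnesi(3) == witch_of_agnesi(-3)
print([round(witch_of_agnesi(x), 3) for x in (-2, 0, 2)])   # [1.0, 2.0, 1.0]
```
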

When Agnesi first wrote her 2 volumes of Analytical Institutions, she used her genius in mathematics to teach her younger brothers, and young Italians as well. Her prowess in mathematics was shared more widely when, after the success of her book, she became a professor of mathematics at the University of Bologna.


Crowley, Paul. "Maria Gaetana Agnesi." New Advent. 08 Dec 2003. http://www.newadvent.org/cathen/01214b.htm

Unlu, Elif. "Maria Gaetana Agnesi." Agnes Scott College, 1995. 08 Dec 2003. http://www.agnesscott.edu/lriddle/women/agnesi.htm

Parente, Anthony. "I Wrote the First Surviving Mathematical Work by a Woman." ITALIANSRUS.com, 2003. 08 Dec 2003. http://www.italiansrus.com/articles/whoami5.htm… [read more]

Statistical Analysis Reported in Two Term Paper

Term Paper  |  12 pages (3,282 words)
Bibliography Sources: 1+


First, no such mention was ever made in the beginning of the study with respect to gender differences. Second, logistic regression analysis and/or techniques have no earthly association with differences. Had the authors wanted to determine whether or not differences occurred, they should have employed the proper tool, a "t" test or ANOVA. Again, this was not the case. Additionally… [read more]

Different Components of Statistical Testing Essay

Essay  |  2 pages (830 words)
Bibliography Sources: 2


Statistics in Research: Different Factors to Consider

Statistics in research take two primary forms: that of inferential vs. descriptive statistics. Descriptive statistics, as the name suggests, merely seeks to describe a particular phenomenon as it exists numerically. Examples of descriptive statistics include determining the mean, median, mode or midrange of a particular set of figures or establishing a correlation between two sets of data (Taylor 2015). Presenting statistics in a graph is also considered descriptive in nature (Taylor 2015). Inferential statistics, in contrast, are used when it is impossible to assess data about an entire population group. "It is typically impossible or infeasible to examine each member of the population individually. So we choose a representative subset of the population, called a sample" (Taylor 2015). A good example of this is polling after an election: since it is impossible to accumulate data about all of the voters, a demographically representative group of voters may be polled after they vote. Multiple measurements are often taken in the case of inferential statistics to ensure greater accuracy.
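The descriptive measures named above (mean, median, mode) can be shown on a toy sample; the ages below are invented poll data, and only the standard library is used:

```python
import statistics

# Hypothetical poll respondent ages -- descriptive statistics merely
# summarize this particular sample, inferring nothing beyond it.
ages = [22, 35, 35, 41, 58, 35, 29, 41]

print("mean:", statistics.mean(ages))      # arithmetic average
print("median:", statistics.median(ages))  # middle value when sorted
print("mode:", statistics.mode(ages))      # most frequent value
```
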

Another distinction in regards to statistical findings is the question of statistical significance. All statistics contain some margin of error. Statistical significance means that given the sample size and the probability of error, the computed difference is still likely to be true. For example, "a difference of 3% (58% for women minus 55% for men) can be statistically significant if the sample size is big enough" ("Statistical vs. practical significance," 2015). However, merely because a sampling is statistically significant does not necessarily mean it is practically significant. Practical significance means that the finding is notable enough that it will have a material impact upon decision-making in the real world. Factors may include cost, feasibility, and the extent to which the intervention would have a meaningful and demonstrable effect on the quality of participants' lives, given the size of the effect ("Statistical vs. practical significance," 2015).

There are two major types of errors in statistical analysis: Type I and Type II (Hopkins 2013). A Type I error occurs when a true null hypothesis is rejected -- a false positive, in which an overly sensitive study over-estimates the magnitude of the effect, often in the absence of an appropriate control (Hopkins 2013). A Type II error occurs when a false null hypothesis is not rejected -- a false negative, in which the effect of the intervention is underestimated; this often occurs when too small a sample size is selected (Hopkins 2013). Another type of error is that of bias, either unintentional or intentional on the part of the study design (Hopkins 2013).…… [read more]
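The meaning of a Type I error rate can be illustrated by simulation: when two groups are drawn from the same population, a test at alpha = 0.05 should flag a "significant" difference about 5% of the time. A sketch, assuming scipy is available:

```python
import random
from scipy.stats import ttest_ind

random.seed(0)
trials, alpha, false_positives = 2000, 0.05, 0
for _ in range(trials):
    # Both groups come from the SAME population, so any "significant"
    # difference found here is a Type I error (a false positive).
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

print(false_positives / trials)   # close to alpha = 0.05 by construction
```
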

Germane Quality of Mathematics Research Paper

Research Paper  |  2 pages (643 words)
Bibliography Sources: 1+


Mathematical puzzles are a longstanding facet of mathematics that have numerous applications in the world today. In this respect, there is a significant amount of fascinating information regarding this element of mathematics. This document will concentrate on several different facets of mathematical puzzles, beginning with their history -- which extends as far back as nearly the history of mathematics. It will also detail some of the actual mathematical principles at work in examples of some mathematical puzzles. Additionally, the paper will provide real-world examples of how mathematical puzzles have shaped society at various points in time. Cumulatively, these three points will attest to the immense importance ascribed to mathematical puzzles in the past and present.

In researching the history of mathematical puzzles, it is nearly impossible to distinguish that history from the history of mathematics itself. Some sources date the history of these puzzles from at least 1800 BCE and their deployment by the Egyptians (Kent, N.D.). Interestingly enough, there are numerous principles of mathematics that are directly descended from the Egyptians themselves, which helps to buttress the viewpoint that math-based puzzles coincided with the history of mathematics in general. However, there is evidence of Egyptian mathematics puzzles dating back 3,600 years (1650 B.C.) that are strikingly similar to the riddle about the man going to St. Ives with seven wives (NY Times). This Egyptian text was preserved on papyrus, which presages the notion of using textbooks for math puzzles. These puzzles have descended into modernity orally (such as in chants and riddles) and through the formal implementation of textbooks. Midway through the 20th century, non-cooperative math games provided the basis of John Nash's game theory.

The mathematics of math games is actually fairly diverse, and largely hinges upon which particular math game one is playing. Still, there are some general principles that apply to most of these games. For instance, cardinality is typically important in math games. Cardinality is…… [read more]

Digital Audio Broadcasting System Case Study

Case Study  |  5 pages (1,283 words)
Bibliography Sources: 5


Analogue to Digital Converter

This is an electronic device that converts continuous signals to discrete digital numbers. When an analogue voltage or current is fed into the device as an input, it converts it into a digital number proportional to the voltage or current magnitude. There are a number of terms related to ADCs, including resolution, accuracy, response type, sampling rate, aliasing, dither, oversampling, relative speed and precision, and the sliding scale principle; however, just a few of them will be considered in this study.

The resolution of an ADC refers to the number of discrete values it can generate over the range of analogue values. Since these values are stored electronically in binary form, the resolution is normally expressed in bits, with the number of available discrete values being a power of two. For instance, an ADC with a resolution of 6 bits can encode an analogue input to one of 64 different levels, since 2^6 = 64. It is also possible to define resolution electrically and express it in volts. The voltage resolution of an ADC is found by dividing the overall voltage measurement range by the number of discrete intervals, as follows:


Q = EFSR / N, the resolution in volts per step, where:

EFSR = the full-scale voltage range, given by VRefHi - VRefLow

M = the ADC's resolution in bits

N = the number of intervals = 2^M - 1 (Knoll 1989)

Consider an example given by Knoll (1989) where the full-scale measurement range is 0 to 7 volts and the ADC resolution is 3 bits, which means 8 quantization levels, i.e. 2^3. In terms of ADC voltage resolution this equals 7 V / 7 steps, which gives 1 V per step.
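Knoll's worked example maps directly onto the resolution formula; a sketch (the function name is my own, introduced for illustration):

```python
def adc_resolution(v_ref_hi, v_ref_lo, bits):
    """Voltage resolution Q = EFSR / (2**bits - 1), volts per step."""
    e_fsr = v_ref_hi - v_ref_lo      # full-scale voltage range
    intervals = 2 ** bits - 1        # N = 2^M - 1 steps between levels
    return e_fsr / intervals

# Knoll's example: 0-7 V range with a 3-bit ADC (8 levels, 7 steps).
print(adc_resolution(7.0, 0.0, 3))   # 1.0 volt per step
```
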

The ADC is not exempt from the errors encountered by other instruments, and its errors have a number of sources, which raises the question of accuracy. These errors are categorized as quantization error, non-linearity error and aperture error. Quantization error is caused by the finite resolution of the ADC and cannot be avoided in any ADC, while non-linearity error occurs due to physical imperfections of the ADC, which lead to a deviation of the output from a linear function of the input. The third error is caused by clock jitter and is usually exposed when digitizing a time-variant signal. The non-linearity error can be reduced by calibration or even averted by testing. In most ADCs the range of input values that maps to each output value is linearly related to that output value; these are referred to as linear ADCs.

The speed and precision of an ADC vary depending on its type, with Wilkinson ADCs considered the best since they exhibit the best differential non-linearity. An ADC is usually represented using a symbol; the conventional electrical symbol is as below (schematic).

Demodulator (Band pass filter)

A band pass filter is a device that helps… [read more]

History and Present Day Applications of Logarithms Essay

Essay  |  3 pages (877 words)
Bibliography Sources: 4



History and Modern Applications of Logarithms

The first publication to contain a mention of logarithms, their method of derivation, and a table of logarithms was the 1614 Mirifici Logarithmorum Canonis Descriptio by the Scottish nobleman John Napier (ST 2005). Napier's book did not describe or list logarithms as they are known today; rather, the logarithms contained in this work were meant to replace the trigonometric multiplications needed in astronomy and other branches of science with simpler additions of established figures (Campbell-Kelly 2003). Henry Briggs, a professor of geometry at Oxford, was very inspired by Napier's work, developing his own ideas based on those in the Descriptio and meeting with Napier to discuss developments and recalibrations of the logarithms contained in Napier's original and pioneering work on the subject (Campbell-Kelly 2003).

Briggs would go on to publish his own table of logarithms for common numbers (as opposed to the logarithms of sines contained in Napier's volume); Briggs' tables showed the logarithms for every whole number below 1000 carried out to eight decimal places, providing a very useful tool to the navigators, astronomers, and mathematicians working in their day, centuries before the advent of computers and calculators (ST 2005). This was published in 1617, the year of Napier's death, and by 1624 Briggs had expanded his tables to include all integers from 1 to 20,000 and from 90,000 to 100,000, carried out to fourteen decimal places (ST 2005). These tables led to a massive increase in the use of logarithms in fields where their usefulness was already established, and this subsequently led to expansions in the applications for logarithms generally (Campbell-Kelly 2003).

There are many different modern applications for logarithms that have nothing to do with the distances of navigation and astronomy -- or any physical measurements at all -- proving that logarithms are indeed an incredibly useful mathematical tool on a scale that Napier himself did not envision. Anything that involves exponential growth can most easily and accurately be calculated using logarithms; studies of population growth, nuclear reactions, and many other scientific inquiries depend on the use of logarithms to develop real and usable data and projections (Tom 2002). Logarithmic scales also exist in electrical engineering, as a means of measuring signal decay, and there are many bodily functions and reactions that are logarithmic in nature, leading to many other biological and medical uses for an understanding and utilization of logarithms (Tom 2002).
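A standard instance of the exponential-growth use: the doubling time of a quantity growing at rate r follows from taking a logarithm, since 2 = e^(rt) gives t = ln(2)/r. The 3% rate below is an arbitrary illustration:

```python
import math

# Doubling time for continuous exponential growth at rate r per year:
# solve 2 = e**(r * t)  =>  t = ln(2) / r.
r = 0.03                          # 3% per year, chosen for illustration
doubling_time = math.log(2) / r
print(f"{doubling_time:.1f} years")   # about 23.1 years

# Check: growing at rate r for that long really does double the quantity.
assert math.isclose(math.exp(r * doubling_time), 2.0)
```
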

Another common use for logarithms is in the world of banking, specifically in the calculation of interest and periods of repayment on…… [read more]

Group Will Behave, We Make a Hypothesis Term Paper

Term Paper  |  2 pages (580 words)
Bibliography Sources: 3


¶ … group will behave, we make a hypothesis: a testable proposition (or set of propositions), believed to be true, that seeks to explain the occurrence of some specified group of phenomena (Random House, 2010). For example, let's say that the widget-making department is producing fewer widgets per hour this year than last year despite the fact that the number of employees has remained constant. You hypothesize that the decreased productivity is because of low morale, but how do you know if your hypothesis is correct?

Hypothesis testing is a statistical way of testing the validity of a hypothesis. In business and the social sciences, hypothesis testing allows us to generalize about a population based on sample information by using methods that allow the researcher to separate the effects of systematic variation of a variable from mere chance effects (Sarich, 2010). This is particularly important in business because we often cannot isolate or control for phenomena in a laboratory-type setting the way a physicist or biologist can (Sarich, 2010).
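The widget example could be examined with a two-sample t-test; the hourly counts below are invented purely for illustration, and scipy is assumed to be available:

```python
from scipy.stats import ttest_ind

# Hypothetical widgets-per-hour counts for the same department
# (invented data, not from any real study).
last_year = [52, 49, 55, 53, 51, 54, 50, 53]
this_year = [47, 45, 49, 46, 48, 44, 47, 46]

# Null hypothesis: mean hourly output is unchanged between the two years.
result = ttest_ind(last_year, this_year)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A small p-value says the drop is unlikely to be mere chance; it does
# NOT by itself confirm low morale as the cause.
```

Note the last comment: rejecting the null separates systematic variation from chance, but identifying the cause of that variation requires further investigation.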

A 1999 study on the automobile insurance industry appearing in the Journal of Economics and Business illustrates the real world applicability of hypothesis testing. The study entitled Modeling Market Shares of the Leading Personal Automobile Insurance Companies, looks to identify the advantages that give one firm more market share over another. The author uses several hypothesis tests to analyze the market share of the leading personal auto liability insurers from 1980 to 1994, discovering in the process that automation and advertising are significant sources of competitive advantage, and that price-cutting, reductions in commission rates and concentration in the private passenger line of insurance are not - useful information in helping an insurer to decide where to…… [read more]

Measurement Scales That Are Used in Collecting Essay

Essay  |  2 pages (716 words)
Bibliography Sources: 0


¶ … measurement scales that are used in collecting and organizing research data. The scales discussed in this text are: nominal, ordinal, interval, and ratio. The differences between the measurement scales determine how data can be manipulated and studied and which kinds of conclusions can be reached using the data.

Nominal scales are the simplest of the measuring devices. These scales only measure the classification of items. Either the data does or does not fit the scheme. An example of this is gender. When you ask for a respondent's gender on a survey form, this is a nominal question. There are only two answers, and the only arithmetic you can use on the data is to count how many of each kind are in each classification.

Ordinal scales include the classification involved in nominal scales, but they can also measure preferential order. In addition to classifying popcorn, for example, as air-popped or oil-popped, the respondent can also rank the different flavors of popcorn in order of best to worst. The limitation of using ordinal scales is that there is no uniform interval between the rankings. The difference between "highest" and "high" isn't necessarily the same in the respondent's mind as the difference between "lowest" and "low."

Interval scales take care of the problem with ordinal scales, as they allow for classification, order and an equality of difference between rankings. The difference between 1 and 2 is the same as the difference between 4 and 5, for example. The data is relatively symmetrical and researchers can do operations like plotting a standard deviation and finding the mean. Intervals can be used to find averages, which is not possible with ordinal rankings.

Ratio scales are the most complicated and can be the most useful. They include all the qualities of nominal, ordinal, and interval scales, plus the provision for an absolute zero. Measurements like distance, volume, length, money, population counts, etc. are all measured on a scale that begins at zero. Ratios are most useful with data that can be definitively measured and they are more useful in the hard sciences than behavioral sciences.
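The four scales described above can be sketched in code; the data values below are invented for illustration, and the point is only which operations are meaningful at each level:

```python
# Illustrative sketch: which summary statistics make sense at each level
# of measurement. All data values here are invented.
from collections import Counter
from statistics import mean, median, stdev

# Nominal: categories only -- counting is the only valid arithmetic.
genders = ["F", "M", "F", "F", "M"]
print(Counter(genders))                       # counts per category

# Ordinal: ranks have order but no uniform interval -- median and mode
# are meaningful, but averaging the ranks is not.
rankings = [1, 2, 2, 3, 4]                    # e.g., popcorn flavors, best to worst
print(median(rankings))

# Interval: equal differences between values -- mean and standard
# deviation make sense, but ratios do not (no absolute zero).
temps_c = [20.0, 22.5, 21.0, 19.5]
print(mean(temps_c), stdev(temps_c))

# Ratio: an absolute zero exists -- ratios are meaningful
# (10 km really is twice as far as 5 km).
distances_km = [5.0, 10.0, 2.5]
print(max(distances_km) / min(distances_km))
```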

Choosing a measurement scale inherently involves looking at white…… [read more]

Early Reading or Fluency and Word Identification Essay

Essay  |  2 pages (661 words)
Bibliography Sources: 2


Jeffery Case Study

Background for Jeffery

Jeffery has problems understanding algebraic concepts such as polynomials and factoring. Jeffery doesn't require any special aids, and his learning capabilities are normal for his age.

Solve polynomials and factorization with the help of a mnemonic device such as FOIL

Develop the ability to understand the order in which polynomials are solved

Become familiar with complex algebraic terms with the help of vocabulary teaching.

Supporting information for set goals

John Steele (2003) explains that children having difficulty with mathematical concepts can be helped with the use of mnemonic devices because "Mnemonics are useful for memorizing rules, steps, and procedures." (p. 624)

Vocabulary teaching is also critical for developing a good understanding of algebraic concepts. This was demonstrated by Richard Drake's research of 1940 and was later supported by studies carried out by others. Myszczak found that vocabulary learning is significant because it teaches students the ability to focus on the question and on what is being asked, as "students often search the problem for numbers rather than attempting to comprehend what is truly being asked in the problem" (p. 28).

Schoenberger and Liming (2001) focused on the teaching of a specific list of words that could help students learn mathematical concepts. They believed that "Students should be able to use and understand vocabulary in order to think about and discuss mathematical situations" (p. 27).

Apart from teaching vocabulary, it was also found that simply communicating with students while teaching mathematical concepts could help facilitate better understanding of the concepts. In other words, students could communicate the problem and seek the solution by talking to their teachers. "Language is a major medium of teaching and learning mathematics; we serve students well when we support them in learning mathematical language with meaning and fluency" (Rubenstein, 2007, p. 206).


First, Jeffery needs to understand that any two sets in parentheses without any sign in the middle means multiplication. Once he understands that, he needs to understand the…… [read more]
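As an illustration of the FOIL mnemonic mentioned in the goals (a hypothetical sketch, not part of Jeffery's actual lesson plan), the four products — First, Outer, Inner, Last — can be written out explicitly for a pair of binomials:

```python
# A minimal FOIL sketch (illustrative): expand (a*x + b)(c*x + d)
# into the coefficients of x**2, x, and the constant term.
def foil(a, b, c, d):
    first = a * c      # a*x times c*x  -> x**2 term
    outer = a * d      # a*x times d    -> x term
    inner = b * c      # b   times c*x  -> x term
    last = b * d       # b   times d    -> constant term
    return (first, outer + inner, last)

# (x + 2)(x + 3) = x**2 + 5x + 6
print(foil(1, 2, 1, 3))  # (1, 5, 6)
```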

Digital Audio Book Report

Book Report  |  2 pages (679 words)
Bibliography Sources: 1


Digital Audio

Over the last several years, digital audio has continually transformed the way that people listen to and record various forms of music. Capturing how sound waves travel can be challenging, as different instruments reflect pitch, tone, bass, and timbre in varying degrees. Analogue and acoustic devices record only the continuous flow of the music; when it is replayed, the sounds are often distorted, because these devices cannot reproduce the actual extremes of the reverberations heard in real life. In the last few years, digital audio has become an increasingly popular solution to these challenges, because this form of recording stores the specific mathematical values that the music represents: mathematical formulas using decimals are used to reflect the recorded sounds more accurately. This is important, because it shows how digital recordings work to accurately reflect the sounds that are recorded. To fully understand how this new technology captures the overall collection of different sounds requires comparing these forms of technology with one another. Once this takes place, it provides the greatest insight into how digital audio technology is improving the way everyone listens to and records music (Pohlmann, n.d.).

Digital Recording vs. Analogue and Acoustic Recording

The big difference between digital recording and analogue/acoustic recording is that digital recording represents the various sounds as binary numbers. These are the mathematical calculations used to convert the sounds being recorded into actual resonance on the recording device. What makes digital audio more accurate is the way the binary code is represented: it is concerned with the actual numerical values of the samples rather than the underlying wave. In mathematics,…… [read more]
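The sampling-and-quantization idea behind digital audio can be sketched as follows; the sample rate and bit depth are illustrative choices for the sketch, not values taken from the text:

```python
# Hypothetical sketch of digital audio's core idea: a continuous wave is
# sampled at discrete times, and each sample is quantized to a binary
# integer. Sample rate and bit depth below are illustrative assumptions.
import math

SAMPLE_RATE = 8000     # samples per second (assumed for the sketch)
BITS = 8               # bit depth: 2**8 = 256 possible levels

def sample_sine(freq_hz, n_samples):
    """Sample a pure sine tone and quantize each value to BITS bits."""
    levels = 2 ** (BITS - 1) - 1               # 127 for 8-bit signed
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE                    # discrete sample time
        value = math.sin(2 * math.pi * freq_hz * t)  # continuous wave, -1..1
        samples.append(round(value * levels))  # quantized integer sample
    return samples

tone = sample_sine(440, 8)
print(tone)  # small integers tracing the wave's shape
```

A higher sample rate and bit depth would track the wave more faithfully, which is the sense in which the decimal (numerical) representation improves accuracy.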

Spiritual Principle: So Teach Us to Number Term Paper

Term Paper  |  2 pages (571 words)
Bibliography Sources: 3


Spiritual Principle:

So teach us to number our days, that we may apply our hearts unto wisdom. (KJV Psalm 90:12)

The school year consists of two semesters, each containing three units. During unit one of the first semester, which is four weeks long, students will learn about functions. During the second unit, which is five weeks long, students will learn about algebra investigations. During the third unit, which is seven weeks long, students will study the geometry gallery. During the fourth unit, which is six weeks in duration, the chance of winning will be covered. The fifth unit is also six weeks long, and students will learn algebra in context. The objective of the sixth and last unit, which is four weeks long, is for students to learn coordinate geometry.

Suggested Activities and Experiences:

1. To learn functions, students will spend lab time exploring the National Library of Virtual Manipulatives (http://nlvm.usu.edu/en/nav/grade_g_4.html). By clicking on the functions button, students can play the game that allows them to drag an input number into a machine which then gives the output. Based on the pattern of inputs and outputs, students can figure out what the remaining inputs and outputs will be based on the pattern established by playing the game. This game will allow the students to learn the basic concepts of function in order to move on to other more challenging concepts.

2. To learn algebra investigations, students will download the Mathematics I Frameworks: Student Edition document (https://www.georgiastandards.org/.../9-12%20Math%20I%20Student%20Edition%20Unit%202%20Algebra%20In...). On page seven (7) of this document is an exercise called the…… [read more]

Geometry of Design (Elam, Kimberly) Book Review

Book Review  |  4 pages (1,184 words)
Bibliography Sources: 1


Geometry of Design

Elam, Kimberly. (2001). The Geometry of Design. New York: Princeton Architectural Press.

The Geometry of Design is not a book about nature, physics, or even design. Instead, it is a relatively short and simple overview of the role of geometry within nature -- whether it is the analysis after the fact from a human perspective or the way nature works that we find pleasant, the book explains the prevalence of the Golden Mean and other geometrical theorems within nature's design.

Proportion in Man and Nature - Proportion is all around us; it is in everything designed within the sphere of nature: a leaf, a shell, a flower. And these proportions are instinctively pleasurable for us, which is likely the reason why much of design and architecture is based on the very same principles of ratio, proportion, and structure. The basis for this design structure is the Golden Ratio, or 1:1.618. Since the Renaissance, this is the proportion that artists and architects have used to proportion their works for mass appeal. Fascinating, however, is just how many objects in nature follow this exact proportion.

Talking Points-

Nature is typically proportionate in design, showing smaller objects to be part of a greater whole.

Even animals show this same proportion, a fish for example, when split into individual rectangles, retains the 1:1.618 ratio.

Similarly, the human body in classical drawing (Leonardo, the Greeks, etc.) form similar ratios.

Preferred facial proportions also follow the ratio; faces that do not are often considered less pleasing.

Chapter 2 -- Architectural Proportions - Through a series of dynamic rectangles, humans have developed their entire building system off this ratio. The harmony of space, e.g. windows, doors, arches, etc., especially in public buildings (governmental locations, arenas, religious buildings), all serves both to inspire and make one comfortable.

Talking Points

Ancient architects were very concerned with the way a building was shaped, laid out, and built. It had to conform to strict proportions in order to be appropriate, from a symbolic viewpoint, to its function.

Each architectural discovery and innovation resulted in a reestablishment of the principles of appropriate design (e.g. circular stained glass windows in cathedrals, etc.)

This tradition remained in effect for several centuries; progressing through styles like the Baroque, Gothic, Romantic, etc.

In 1931, a French architect, Le Corbusier, expanded this into a more complex merging of mathematics and geometry -- regulating lines. He believed "with regulating lines, you make God a recipe."

In a way, this invigorated the reemphasis on proportion and meaning to form a more 20-21st century way of applying the Golden ratio to modern construction and design.

Chapter 3 -- Golden Section - The Golden Section of any rectangle is a ratio of the Divine Proportion. The Divine Proportion is derived from the division of a line segment into two segments such that the ratio of the whole to the longer segment is the same as the ratio of the longer to the shorter, approximately 1:1.61803. This ratio can be found in any portion or sub-portion of a triangle, rectangle,… [read more]
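The Divine Proportion described in this chapter can be checked numerically; this is a small sketch of the definition, not material from the book:

```python
# Sketch of the Divine Proportion: divide a segment so that
# whole : longer = longer : shorter. Both ratios equal
# phi = (1 + sqrt(5)) / 2 = 1.61803...
import math

phi = (1 + math.sqrt(5)) / 2

whole = 1.0
longer = whole / phi           # longer piece of the divided segment
shorter = whole - longer       # remaining shorter piece

print(round(phi, 5))                 # 1.61803
print(round(whole / longer, 5))      # whole : longer  -> 1.61803
print(round(longer / shorter, 5))    # longer : shorter -> 1.61803
```

The equality of the two ratios follows from phi satisfying phi = 1 + 1/phi, which is why the same number keeps reappearing in every subdivision.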

Butts, R.E. (2001). Galileo. In W.H. Newton-Smith Annotated Bibliography

Annotated Bibliography  |  3 pages (864 words)
Bibliography Sources: 0


Butts, R.E. (2001). Galileo. In W.H. Newton-Smith (Ed.), A companion to the philosophy of science (pp. 149-152). Malden, MA: Blackwell Pub.

This excerpt from a reference work is a biographical sketch of Galileo, the 17th-century Italian scientist. It outlines five crucial achievements that he made, a few of which include his divergence from Aristotelian theories of science, his advocacy of the real-world applications of mathematics, and his use of experimentation. The author outlines the origins of Galileo's scientific research, particularly in cosmology and his work with the telescope. His work also centered on making geometry less abstract. He pointed out how geometric laws worked in concert with both the natural and mechanical worlds. The most compelling point the author makes, which has application even today for teachers and researchers, is about Galileo's rejection of the dominant philosophies of the era. Though he created controversy, opposition drove his science to advance, leading ultimately to success.

Frodeman, R., & Parker, J. (2009). Intellectual merit and broader impact: The National

Science Foundation's broader impacts criterion and the question of peer review.

Social Epistemology, 23(3-4), 337-345.

The article examines how scientific discovery dictates social values, and how the philosophy of science has evolved. Science has historically been funded with an eye to how it will benefit society. The specific focus of the piece is on the NSF peer review process and the change in the criteria used to allocate funding that occurred in 1997. This change created two criteria: intellectual merit and broader impacts. The broader impacts criterion includes education/outreach and an effort to broaden diversity. But the question remains: How is benefit to society determined and measured? The article also raises the question of whether these two criteria categories should be merged, and whether intellectual merit is actually a subset of broader impact. This is brought up to point out the potential pitfalls of peer review and to call for a closer examination of its procedures. This is relevant to math education in that research into education practice should be viewed with a mind toward its application in the lives of students and its greater impact on society.

Lesser, L.M. (2000). Reunion of broken parts: Experiencing diversity in algebra.

Mathematics Teacher, 93(1), 62-67.

The author employs a central metaphor of the meaning of algebra, that being the reunion of broken parts. He compares this to the way students interact with algebra, given that they can feel disconnected from it and it is the job of teachers to provide real world applications that will make connections for the students. He takes aim particularly at the need to…… [read more]

Educational Standards Thesis

Thesis  |  2 pages (431 words)
Style: APA  |  Bibliography Sources: 1


Communicating No Child Left Behind Daily Standards

Grade-Appropriate Restatement of New York State 2nd Grade Math Standards

Original Statement of 2.PS.1:

"Explore, examine, and make observations about a social problem or mathematical situation"


Restatement of 2.PS.1:

We're going to look at the kinds of problems people have and the kinds of problems that mathematics can help us solve.

Original Statement of 2.PS.2:

"Interpret information correctly, identify the problem, and generate possible solutions"

Restatement of 2.PS.2:

We're going to learn how to understand what kinds of problems we have to solve and how we can use mathematics to do that.

Original Statement of 2.PS.4:

"Formulate problems and solutions from everyday situations (e.g., counting the number of children in the class, using the calendar to teach counting)"

Restatement of 2.PS.4:

Some of the problems we're going to look at are the kinds of things that people need to figure out all the time, like how to count how many students are in a big room without counting on our fingers and toes.

4. Original Statement of 2.RP.3:

"Investigate the use of knowledgeable guessing as a mathematical tool"

4. Restatement of 2.RP.3:

We're going to learn what an "educated guess" is, how that is different from regular guessing, and how to use educated guesses in mathematics.

5. Original…… [read more]

Operational Definitions Essay

Essay  |  8 pages (2,354 words)
Bibliography Sources: 5


¶ … Operational Definitions of Each of These

It states clearly the expected relationship between the variables

It states the nature of the relationship

It states the direction of the relationship

It implies that the predicted relationship can be tested empirically

It is grounded in theory.

The article "What is a null hypothesis?" explains what distinguishes a null hypothesis from… [read more]

Teaching Calculus to Young Children Thesis

Thesis  |  2 pages (638 words)
Style: APA  |  Bibliography Sources: 2


Mamikon's Approach To Teaching Calculus

Mamikon A. Mnatsakanian, often along with his colleague Tom M. Apostol, has published many papers detailing new instructional methods for explaining otherwise complex concepts in the realm of calculus, as well as new ways of understanding these concepts. His emphasis is on a visual understanding of calculus, which is more easily observed and intuited by students -- and at a younger age, it seems increasingly evident -- than traditional textual and purely mathematical explanations and understandings. For years, a website has been available with several puzzles and games that help to visually express many of the mathematical measurements and principles of calculus. Several brief examples of Mamikon's teaching style make it clear how the principles of calculus build on lower mathematical understanding, and are in fact easily understood themselves.

Measuring the area of a curved space is essential for many applications of calculus, yet it can be one of the more difficult of the basic tasks facing the average calculus student. Mamikon's illustration of the curving bicycle, and subsequent related illustrations, show quickly and easily how the area described by such a curve is the same as the area -- or partial area -- of a circle (CalTech). The preceding sentence is proof of how difficult such concepts can be to explain clearly and efficiently, but accompanied by Mamikon's illustrations the principle is instantly observed and far more easily remembered and recognized.
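The bicycle illustration can be checked numerically for the special case of a circular path. This is a standard instance of Mamikon's result, reconstructed here as a sketch rather than taken from his paper: if the rear wheel follows a circle of radius r and the frame has length L, the front wheel traces a circle of radius sqrt(r**2 + L**2), and the area between the two tracks is pi * L**2, independent of r.

```python
# Numerical check of the bicycle-tracks result: the area between the
# rear-wheel circle (radius r) and the front-wheel circle
# (radius sqrt(r**2 + L**2)) depends only on the frame length L.
import math

def area_between_tracks(r, L):
    outer_sq = r**2 + L**2                 # front-wheel radius squared
    return math.pi * (outer_sq - r**2)     # annulus area = pi * L**2

L = 1.0
for r in (0.5, 2.0, 10.0):
    print(round(area_between_tracks(r, L), 6))   # same value every time
```

The fact that r cancels out is exactly what the tangent-sweep picture makes visible without any algebra.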

A more thorough and elegant explanation of the same concept is provided in Mamikon's paper (with Apostol) entitled "Subtangents -- An Aid to Visual Calculus." Again, Mamikon starts with a visual explanation of the principle, but goes on to detail how this principle works in calculus (Apostol & Mamikon 2002). Thus, his method of teaching calculus visually creates at least a rudimentary understanding of a principle or practice before any theorem or even a simple equation is introduced. This is the opposite…… [read more]

Errors Type I/Type II Errors Statistical Analysis Thesis

Thesis  |  2 pages (659 words)
Style: APA  |  Bibliography Sources: 3


¶ … Errors

Type I/Type II Errors

Statistical analysis can lead to many different errors of many different types, both in the gathering of data and in its manipulation to produce results in a practical and relevant manner. Often, errors arise as a result of the complex mathematical manipulations that must occur in order to make useful sense of data. These mathematical errors can compound and lead to wildly incorrect interpretations of data, producing results that cannot be trusted or validly used. Other errors can occur in the interpretive phase of data analysis; these can often be far more egregious, and at the same time they are often more difficult to catch. Errors made in the actual mathematical manipulation of data often deliver results that -- for obvious reasons -- simply do not make sense. Interpretive errors, however, are more difficult to catch almost by definition. The data itself may be entirely sound, and therefore the results are more likely to be trusted, but an error in interpretation can still cause the data to be incorrectly applied.

There are two rather basic and fairly straightforward errors, known as Type I and Type II errors, that are commonly made in data analysis. Both refer to a basic mistake regarding the status quo from which the analysis is meant to measure change. This status quo is called the null hypothesis, the idea/belief that there was no change in the phenomenon measured during the test. When there is no change in the situation or phenomenon, the null hypothesis is said to be true (that is, nothing happened). If a change in the situation/phenomenon has in fact occurred, then the null hypothesis (the idea that nothing has happened) is quite clearly false. A Type I error occurs when there is a false positive -- when the data analysis suggests a change has occurred, when in fact there has been no change. Thus, in a Type I error the null hypothesis is true…… [read more]
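A short simulation makes the Type I error concrete. All values here, such as the 5% significance level, the sample size, and the test form, are illustrative assumptions, not details from the essay: when the null hypothesis is true, a test run at significance level alpha produces a false positive in roughly alpha of repeated experiments.

```python
# Simulate many experiments where the null hypothesis is TRUE (the
# population mean really is 0) and count how often a simple two-sided
# z-test wrongly rejects it -- i.e., the Type I error rate.
import random
import statistics

random.seed(42)
N_TRIALS = 2000
SAMPLE_SIZE = 30

false_positives = 0
for _ in range(N_TRIALS):
    # Draw a sample from a mean-0 population: nothing actually happened.
    sample = [random.gauss(0, 1) for _ in range(SAMPLE_SIZE)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / SAMPLE_SIZE ** 0.5
    z = m / se
    if abs(z) > 1.96:                  # reject H0 at roughly alpha = 0.05
        false_positives += 1

print(false_positives / N_TRIALS)      # close to 0.05
```

A Type II error would be the mirror-image simulation: draw samples from a population where a change *did* occur and count how often the test fails to reject.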

Fractal, in Its Completed and Perhaps Complex Essay

Essay  |  5 pages (1,470 words)
Style: MLA  |  Bibliography Sources: 4


Fractal, in its completed and perhaps complex form, resembles a fracture or a series of complicated and uncoordinated breaks. Indeed, the word can trace its origins to the Latin fractus, which means fractured or broken. A fractal, as is mathematically understood, is the end product (or a product that is in the process of completion in a recursive manner) of… [read more]

Pi Is Interwoven With the History Essay

Essay  |  4 pages (1,333 words)
Style: MLA  |  Bibliography Sources: 8


¶ … pi is interwoven with the history of humanity. Remarkably, "By 2000 BC, men had grasped the significance of the constant that is today denoted by pi, and…had found a rough approximation of its value," (Beckmann 9). Math historians assume that the study of pi began as an analysis of magnitude: that circles remained circles no matter how big or small. Beckmann suggests that early humans contemplated "the peculiarly regular shape of the circle," which was visible everywhere in nature in "its infinite symmetry," (9). Pi remains a mystery in spite of thousands of years of scholarship and investigation. The number is both irrational (it cannot be represented as a ratio of two integers) and transcendental (it is never the solution of a polynomial equation that involves rational numbers). Pi is remarkable in its scope. Professor Yasumasa Kanada of the University of Tokyo writes computer programs that are designed to calculate pi, and has continually broken his own world records. Kanada has computed pi to well over one trillion decimal places and remains "intent on achieving new world records" (Arndt, Haenel, Lischka, and Lischka 1). Because of Kanada's work, pi is now the mathematical constant "which has been calculated to the greatest number of decimal places," (Arndt et al. 1). In addition to performing the calculations for pure pleasure, Kanada and other mathematicians study pi in search of patterns. Understanding pi would be a significant epiphany, a major evolution in human history.

So far pi has yet to reveal itself fully and the number remains a major mathematical mystery. Pi can be understood easily on its most basic level: that of Euclidian geometry. The fundamental realization that the wider a circle is "across," the longer it is "around" is what led to the discovery of pi in the first place (Beckmann 11). That discovery seems to have occurred in multiple cultures, as pi was studied among the ancient Mesopotamians, Egyptians, and Chinese. The ancient Greeks delved deeply into the study of pi, especially pi's relationship to geometry. Pi was revealed as a constant ratio not just of circumference to diameter but also of radius to area. The existence of both constants was well-known, but the fact that both constants were in fact one and the same number represented a major breakthrough. Arndt et al. note that the ancient Greeks first drew the connection between both ratios as they related to the circle. In 414 BCE, Aristophanes presented the problem known as "squaring the circle," which has become the quintessential problem of pi.

Pi has numerous applications, and not just in the world of geometry. Number theorists hope to discover meaning in the endless stream of digits represented by pi, and pi could in fact be meaningful to the study of theoretical physics. Arndt et al. point out that calculating pi sometimes depends on time as well as space. Pi is also meaningful for probability theories, such as the Wallis product (Arndt et al. 9). Moreover, pi may be related… [read more]

Greek Numeration Systems Thesis

Thesis  |  2 pages (612 words)
Style: MLA  |  Bibliography Sources: 2


The Greek numeration system is one of the oldest in the world and is still in use in many parts of Greece, especially for ordinal numbers. The Greek numeration system was based both on internal invention and on constant interaction with some of the neighboring peoples, most notably the Phoenicians, the Egyptians and the Babylonians, all of whom had developed their own numeration systems and thus influenced the Greek one.

There are two types of Greek numeration systems, depending on the moment they came into existence. The first type, predominantly referred to as Herodianic, was used as early as 500 BC. Like most of the old numeration systems, this was primarily an additive one, with letters allocated to numbers based primarily on the first letter of the word for the number. For example, penta was five, so the letter pi was designated to represent the number five. In a similar manner, the letter symbol for 10 was delta, because the number was referred to as deka and thus started with that letter.

This system was pretty much replaced by the Ionic system later on. The Ionic system was primarily based on the Greek alphabet: for each unit 1 through 9, a letter of the alphabet was allocated, a mechanism which was also applied for the tens (10 through 90) and the hundreds (100 through 900). However, the Greek alphabet only had 24 letters, which meant that three new ones were added for this purpose alone. These letters were digamma (an almost double gamma), qoppa and sampi, allocated to 6, 90 and 900 respectively.
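The Ionic scheme described above can be sketched in code; the specific letter assignments follow the standard scholarly reconstruction and should be treated as illustrative:

```python
# Ionic (alphabetic) Greek numerals: separate letters for units, tens,
# and hundreds, including the archaic digamma (6), qoppa (90), and
# sampi (900). Index 0 of each table is the empty string (no digit).
UNITS    = ["", "α", "β", "γ", "δ", "ε", "ϝ", "ζ", "η", "θ"]   # 1-9
TENS     = ["", "ι", "κ", "λ", "μ", "ν", "ξ", "ο", "π", "ϙ"]   # 10-90
HUNDREDS = ["", "ρ", "σ", "τ", "υ", "φ", "χ", "ψ", "ω", "ϡ"]   # 100-900

def to_ionic(n):
    """Convert 1..999 to an Ionic numeral (keraia mark omitted)."""
    if not 1 <= n <= 999:
        raise ValueError("sketch handles 1..999 only")
    return HUNDREDS[n // 100] + TENS[n % 100 // 10] + UNITS[n % 10]

print(to_ionic(6))     # digamma
print(to_ionic(241))   # sigma (200) + mu (40) + alpha (1)
```

Because each position has its own letters, the system is additive without being positional: "σμα" reads unambiguously as 200 + 40 + 1.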

At the same time, after each number thus written, a small sign would be added in the form of…… [read more]

Oxford Murders Martinez, Guillermo. Book Report

Book Report  |  1 pages (318 words)
Bibliography Sources: 0


Oxford Murders

Martinez, Guillermo. The Oxford Murders. MacAdam/Cage, 2005.

The Oxford Murders is the story of an unnamed Argentinean mathematician studying at Oxford. One day, while accompanied by his landlady's friend, the don and professor of mathematics Arthur Seldom, the two find Mrs. Eagleton murdered on her sofa. The only clue, other than the fact that the old woman worked on the Enigma Code during World War II, is a circle left by the killer in a mysterious note sent to Seldom, along with the lines, "the first in a series." Soon it becomes clear there is apparently 'serial' killing occurring, on a very literal level. Seldom receives a note, accompanied by a symbol, every time a murder takes place. Seldom fears that the killer is effectively parodying his mathematical work on theories of patterns or series in mathematics. One of Seldom's areas of expertise is Wittgenstein's theories about series and the possibilities for deviation in numerical series.

The…… [read more]

How Does My Calculus Class Help or Relate to a Business Management Major? Essay

Essay  |  3 pages (824 words)
Bibliography Sources: 0



The world of business is comprised of many unique disciplines. The manager can expect to synthesize all of them as part of their work. Therefore, a strong multidisciplinary background is essential in the pursuit of a major in Business Management. The subjects learned in this major will require a wide range of basic knowledge including economics, sociology, psychology and statistics. Calculus also plays a role, providing both a functional and a theoretical backdrop.

It is often considered that the basic functions of calculus are not used in the acquisition of a business education. However, understanding the fundamentals of calculus allows the student to derive formulae in financial courses. Students of Business Management should have a strong knowledge of corporate finance, and this requires some basic calculus. The models used to price derivatives are based on calculus. A stock option, for example, comprises an underlying asset with an intrinsic value and a fluid time value. This concept can be extended to any asset. Business Management is, at its core, managing assets. But to manage those assets requires the ability to understand how the value of those assets is derived.

Differential calculus forms a key component of the business world. Many important concepts in business management relate directly back to calculus: for example, the yield curves on bonds, or the demand curve of a product relative to macroeconomic variables. There are many instances where a manager must interpret complex, interrelated and fluid variables in order to predict the future.

Integral calculus is useful when examining concepts involving fluidity. The business world is constantly changing. The numbers used to interpret the world in order to make managerial decisions are also constantly changing. The relationship between those numbers is also subject to constant flux. It is impossible to understand the business environment without understanding these relationships. A sound knowledge of calculus fundamentals allows for that.

The concept of limits also proves useful to the Business Management student. Management is a subject based on finding ways to achieve objectives. In many cases those objectives are quantifiable, and thus deriving the best way to achieve them requires calculus. The concepts, however, can also be applied to non-quantifiable objectives once the student understands the basic principles. The manager can then understand how to bring a variable such as the organizational culture closer to a limit such as a strong emphasis on integrity. Even without numbers, the principles of analyzing the relationships between variables remain the same.
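A hypothetical example (all numbers invented) shows how a derivative, computed as a limit, identifies an optimum a manager might care about, such as a revenue-maximizing price:

```python
# Invented scenario: a linear demand curve q = 100 - 2p gives
# revenue R(p) = p * (100 - 2p). The revenue-maximizing price is where
# marginal revenue (the derivative) crosses zero.
def revenue(p):
    return p * (100 - 2 * p)

def derivative(f, x, h=1e-6):
    """Central-difference numerical derivative: the limit idea in code."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Scan whole-number prices from 0 to 50 and pick the best one.
best = max(range(0, 51), key=revenue)
print(best, revenue(best))                 # price 25, revenue 1250
print(derivative(revenue, 25))             # approximately 0 at the optimum
```

The same marginal-analysis pattern applies to cost, yield, and demand curves; only the function being differentiated changes.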

A…… [read more]

Eudoxus of Cnidus Essay

Essay  |  4 pages (1,237 words)
Style: APA  |  Bibliography Sources: 3


Eudoxus of Cnidus

Boyer, in his "A History of Mathematics" gives a quote from Eudoxus that is quite self-descriptive of this genius, "Willingly would I burn to death like Phaeton, were this the price for reaching the sun and learning its shape, its size and its substance."

It is descriptive of the man from Cnidus because it shows us the mind of this genius, the curiosity he displayed during his lifetime and why he contributed so much, in particular, to the fields of mathematics and astronomy.

Unfortunately, all of his works are lost to history. We have snippets, pieces, basic facts about Eudoxus' life and work, and some words from others through the ages who have dug up what could be found and put it together in biographies and descriptions of his work.


We know that Eudoxus was born in Cnidus, Asia Minor (now Turkey). Historical documents place his birth sometime between 408 B.C. and 390 B.C., and his death at the age of 50 to 53 years old. The best guess is 408-355 B.C.

He is known for his revolutionary work as a mathematician, astronomer, and philosopher. However, at some point in his life he was also a theologian, meteorologist, doctor, and geographer. He studied mathematics in Italy under the tutelage of Archytas, the Greek mathematician and philosopher. Many historians claim that Eudoxus worked with Plato in Athens, but others dispute whether there is enough data to support that and are unclear about the relationship between the two great intellectuals. (O'Connor & Robertson, 1999) Archytas and Plato were close friends, so it is possible that Eudoxus met Plato, and, perhaps, this too could explain part of the confusion over whether or not Plato and Eudoxus actually worked with each other.

It is somewhat clear from historical records that Eudoxus had little respect for Plato's analytic ability, but since Plato was not the mathematician that Eudoxus was, that is to be expected. It does not appear as if either had much influence on the other's work. (O'Connor & Robertson, 1999)

Diogenes Laertius, the Roman-era biographer of Greek philosophers, claims that Eudoxus did, indeed, study in Athens under Plato. However, some of Laertius' usually solid work has come under question by other scholars, and, since Laertius lived in the third century A.D., we can't be certain he was correct, since, again, all of Eudoxus' work is lost. (Soylent Communications, 2008)

He traveled to Sicily where he studied medicine with Philiston. After that, we surmise, with the help of financial aid from friends, he went to Egypt to learn astronomy with the priests at Heliopolis, and made astronomical observations from an observatory located between Heliopolis and Cercesura. From there Eudoxus travelled to Cyzicus, in northwestern Asia Minor on the south shore of the Sea of Marmara. There he established his own school which proved to be quite popular. As a matter of historical record, it appears that Plato became somewhat jealous of Eudoxus' success with his school. Not much more is… [read more]

Math Webliography Coolmath4kids.com Term Paper

Term Paper  |  1 pages (413 words)
Style: MLA  |  Bibliography Sources: 10


Math Webliography

CoolMath4Kids.com (http://www.coolmath4kids.com/)

Credited as being "by Karen," this site is self-described as an "amusement park of math and more." Colorful icons against a black background make for child-friendly visuals, and menu items like "Number Monster" and "The Geometry of Crop Circles" are guaranteed to please curious young and adult minds alike.

KidsNumbers.com (http://www.kidsnumbers.com/)

Not as visually appealing as it could be, Kidsnumbers is still a valuable resource tool for teachers and parents. Several "Let's Practice" sections encourage children to play and interact.

Mathcats.com (http://www.mathcats.com/)

A chalkboard cat icon welcomes children and their parents to the Web site, whose several separate sections include "Math Cats Explore the World." However, the activities contained on the site are geared toward children older than the cute drawings would suggest. Mathcats is too advanced for youngsters yet lacks the sophistication that might draw a more mature audience.

TeachRKidsMath (http://www.teachrkids.com/)

Seemingly geared toward teachers instead of students, TeachRKidsMath is not as child-friendly as it could be. The exercises are, however, good resources for math teachers needing some activities for their students.

Wolfram MathWorld (http://mathworld.wolfram.com/)

Wolfram might indeed live up to its self-proclaimed subtitle, the "web's most extensive mathematics resource." Containing a wealth of information on every mathematics topic of interest to advanced students from…… [read more]

PHI Golden Ratio Term Paper

Term Paper  |  19 pages (5,174 words)
Style: APA  |  Bibliography Sources: 6


History Of Phi, Mathematical Connections, And Fibonacci Numbers: Nature's Golden Ratio

Throughout history, humans have been seeking to define beauty in quantifiable and meaningful ways. For many observers, the connection between beauty and the rhythmic patterns evinced in the Fibonacci series is clear. While the Fibonacci series is named for an early 13th century Italian mathematician, the so-called Golden Ratio… [read more]
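The convergence this paper builds on can be seen numerically. A brief sketch, not taken from the paper itself: the ratios of successive Fibonacci numbers settle on the Golden Ratio.

```python
# Illustrative sketch (not from the paper): successive ratios of
# Fibonacci numbers approach the Golden Ratio, phi = (1 + sqrt(5)) / 2.
from math import sqrt

PHI = (1 + sqrt(5)) / 2  # ~1.6180339887

def fib_ratios(n):
    """Return the first n ratios F(k+1)/F(k) of the Fibonacci series."""
    a, b = 1, 1
    ratios = []
    for _ in range(n):
        a, b = b, a + b
        ratios.append(b / a)
    return ratios

ratios = fib_ratios(20)
print(abs(ratios[-1] - PHI) < 1e-6)  # True: the ratios settle on phi
```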

Memory Ronald T. Kellogg's Working Memory Components Term Paper

Term Paper  |  5 pages (1,248 words)
Bibliography Sources: 1



Ronald T. Kellogg's "Working Memory Components in Written Sentence Generation": A Review and Further Research Inspired by the Study

In his article, "Working Memory Components in Written Sentence Generation," Ronald T. Kellogg used quantitative research in order to examine how working memory is impeded during distraction. Kellogg begins his article with a literature review detailing the types of research that psychologists have completed regarding memory in the past. He cites Baddeley's 1986 work, which created the Baddeley model: a "phonological loop for storing and rehearsing verbal representations, a visuospatial sketchpad for visual object representation and their locations, and a central executive for attentional and supervisory functions" (341). In addition to this model, Kellogg also cites more modern research by Jonides and Smith suggesting that the "visual, spatial, and possibly semantic stores are dissociable from the verbal store" (341). By considering both this older and newer research concerning working memory, Kellogg designed his own research in order to increase psychologists' understanding of working memory.


Kellogg designed his study in order to increase others' understanding of working memory. Specifically, Kellogg was interested in understanding whether "planning conceptual representations" and "linguistically encoding these into words and sentences" depend on working memory (341). According to Kellogg, putting together a sentence, or sentence generation, requires "planning conceptual content," or deciding what one wants to say in writing, and "linguistically encoding it into a grammatical string of words," or placing those ideas and concepts into a well-formed, grammatically correct sentence (341). If either or both depended on working memory, a second purpose of the study was to assess the strength of this dependence.


College students in a General Psychology class were chosen as the subjects to be tested. The students were assembled, given keyboards, and then given a visual prompt of two words, both nouns. Subjects were then to write a "meaningful sentence" using the two nouns (344). At the same time, students were told to complete a "memory task," such as remembering certain digits. First, students' typing speed was assessed via trials. Next, students were given directions, followed by the two visual prompts, and then time to type their sentences and submit them to a computer. Finally, students were asked to complete the memory task they had been assigned and were given feedback concerning whether or not they had made the correct response. Some students were asked to write sentences using nouns that were related, while others were asked to write sentences using nouns that were unrelated. In one group, students were asked to produce complex sentences, while in the other students were told to write simple sentences (344).

IV. Findings

In accordance with the methods above, the researchers derived results concerning initiation time, sentence length and typing time, grammatical and spelling errors, and concurrent task performance. Only those students who were asked to remember six digits produced shorter sentences; the memory tasks did not affect the length of any other students' sentences. Similarly, spelling and grammar were not affected by the memory… [read more]

Mathematician Biography and Works: The Mathematician Blaise Term Paper

Term Paper  |  4 pages (1,353 words)
Style: Chicago  |  Bibliography Sources: 3


¶ … Mathematician

Biography and Works: The Mathematician Blaise Pascal

The Life of Pascal

Blaise Pascal, along with Rene Descartes, is the rare case of a mathematician equally famous for his religious devotion and contributions to theology as he is for his work with numbers. In fact, Pascal would likely prefer to be remembered as a philosopher of religion rather than a mathematician, theorist, and scientist, as he is today. One biographer of famous mathematicians tartly observed that Pascal's "mathematical reputation rests more on what he might have done than on what he actually effected, as during a considerable part of his life he deemed it his duty to devote his whole time to religious exercises" (Ball 1908). However, other biographers have seen Pascal's religion and mathematical gifts as complementary.

For example, "Pascal's Wager" is based upon the proposition that a person should believe in God because, as a bet, the idea makes sense. There is much to be gained potentially by believing in God if one is 'correct.' Likewise, there is much to lose if one is an unbeliever and one is 'incorrect.' Conversely, there is little to be lost by a person whose belief is in error. A proof of Pascal's Wager might look like this: (1) the probability of God's existence is 50/50; (2) wagering for God brings infinite reward if God exists, and if God does not, there is no net loss; (3) wagering against God brings no gain, and a great loss (Hajek 2005).
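The wager's arithmetic can be laid out as a small decision matrix. This sketch is illustrative only: Pascal's actual argument turns on an infinite reward, so the large finite payoff used here is merely a stand-in to make the expected-value calculation concrete.

```python
# Hedged sketch of the wager as a decision matrix. Pascal's actual
# argument uses an INFINITE reward; the large finite payoff here is
# only a stand-in so the expected-value arithmetic can be computed.
P_GOD = 0.5  # the 50/50 assumption stated in the text

payoffs = {
    # choice: (payoff if God exists, payoff if God does not)
    "wager_for": (10 ** 9, 0),       # enormous gain vs. no net loss
    "wager_against": (-10 ** 9, 0),  # great loss vs. no gain
}

def expected_value(choice):
    if_god, if_not = payoffs[choice]
    return P_GOD * if_god + (1 - P_GOD) * if_not

print(expected_value("wager_for"))      # 500000000.0
print(expected_value("wager_against"))  # -500000000.0
```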

Despite the modernity, even humor, inherent in such moral calculations, Pascal was largely a man of his time, and a devout Christian. Blaise Pascal was born during the 17th century at Clermont on June 19, 1623, and died in Paris on August 19, 1662. Although the Frenchman's early education was confined to modern languages, when his father noted that the boy had unusual mathematical aptitude in geometry (Pascal intuited as a child why the sum of the angles of a triangle is equal to two right angles), he gave his son a copy of Euclid's Elements. It would not be an overstatement to call the young Pascal a prodigy. At the age of fourteen Pascal was admitted to the weekly meetings of French geometricians, at sixteen he wrote an essay on conic sections, and at the age of eighteen he constructed the first arithmetical machine, a kind of prototypical adding machine or calculator (Ball 1908).

However, Pascal suddenly abandoned mathematics in 1647, "after being advised to seek diversions from study and attempted for a time to live in Paris in a deliberately frivolous manner," because of his health ("Blaise Pascal," Island of Freedom, 2008). Pascal's interest in probability theory "has been attributed to his interest in calculating the odds involved in the various gambling games he played during this period" ("Blaise Pascal," Island of Freedom, 2008). However, Pascal's account in his Pensees is different. He says he wished to "contemplate the greatness and the misery of man" in a purely religious… [read more]

Math Curriculum Development Term Paper

Term Paper  |  13 pages (4,174 words)
Style: APA  |  Bibliography Sources: 10


Math Curriculum

Science and its modes of study rely heavily on mathematical techniques, which have been extensively evaluated over the past few years. Numerous studies, like the ones conducted by Mike Cass et al. (2003) and Lynn T. Goldsmith and June Mark (1999), have analyzed both the teaching techniques used for mathematics as well as the overall… [read more]

Pythagoras, the Pythagorean Theorem and Its Relationship Term Paper

Term Paper  |  3 pages (881 words)
Style: APA  |  Bibliography Sources: 3


¶ … Pythagoras, the Pythagorean theorem and its relationship to the area of a circle.

Biography of Pythagoras:

Pythagoras was a Greek sage of the 6th century B.C. He was born on the Greek island of Samos, off the coast of Asia Minor. Pythagoras was introduced to mathematics by Thales of Miletus and his pupil Anaximander, according to Iamblichus, the Syrian historian. He traveled to Egypt around 535 B.C. to continue his studies, but was captured by Cambyses II of Persia and taken to Babylon ("Pythagorean," 2007).

Eventually, Pythagoras emigrated to the Greek colonial city-state of Croton, in Southern Italy (Mourelatos, 2007; "Pythagoras," 2007).

Pythagoras was a "teacher and leader of extraordinary charisma. Pythagoras founded in Croton a society or brotherhood of religious-ethical orientation. The society fostered strong bonds of friendship and a sense of elitism among its initiates through ritual, esoteric symbolism and a code of rigorous self-control, including lists of taboos" (Mourelatos, 2007). This was known as Pythagoreanism. Pythagoreanism became politically influential in Pythagoras' home town of Croton, and eventually spread to other cities in the region ("Pythagoras," 2007).

Pythagoras' teachings were basically ethical, mystical, and religious. He believed in the transmigration of souls from one body to another, known as metempsychosis, either human or animal.

It's unclear whether Pythagoras believed that this led to the immortality of the soul; however, it did lay the foundations for some of the practices of the Pythagorean society he founded. These included vegetarianism and the rituals of purification, in an effort to promote the chances of superior reincarnation (Mourelatos, 2007).

A legend grew around Pythagoras, according to Mourelatos (2007), involving superhuman abilities and feats. However, he believes that this legend was based on the historical reality that Pythagoras was a Greek shaman. Some modern scholars theorize that the religious movement of Orphism, as well as Indian and Persian religious beliefs, influenced Pythagoras.

Although Pythagoras' contemporaries honored him as a polymath, modern scholars question this. Today, many "discount the tradition that he was the founder of Greek mathematics, or even that he proved the geometric theorem named for him" (Mourelatos, 2007).

Pythagoras died in Metapontum, near modern-day Metaponto, in approximately 500 B.C. ("Pythagorean," 2007).

History of the Pythagorean Theorem:

The Pythagorean theorem holds that "the square of the hypotenuse of a right triangle is equal to the sum of the squares of its other two sides" (Meserve, 2007). During Pythagoras' lifetime, the square of a number was represented by the area of a square with the side of a length of that number. With this representation, the Pythagorean theorem can then be stated as "the area of the square…… [read more]
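The relationship stated above can be checked numerically; the following sketch, not part of the paper, verifies it on a few familiar right triangles.

```python
# A quick numeric check of the theorem on a few familiar right
# triangles (a, b are the legs; c the hypotenuse). Illustrative only.
import math

triples = [(3, 4, 5), (5, 12, 13), (8, 15, 17)]

for a, b, c in triples:
    # square on the hypotenuse equals the sum of the other two squares
    assert a ** 2 + b ** 2 == c ** 2

# The theorem also yields the hypotenuse directly:
print(math.hypot(3, 4))  # 5.0
```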

Finding the Diameter Term Paper

Term Paper  |  3 pages (899 words)
Style: MLA  |  Bibliography Sources: 2


Diameter Problem

In this experiment, measuring the diameter of the sun relative to the average radius of the earth's orbit requires some basic geometric knowledge, particularly of the properties of angles. Understanding these properties provides the mathematical relationship between the data given in the problem and the data to be taken throughout the experiment. We must consider, however, that the mathematical aspect is just half of the story. Reckless experimentation often results in intolerable errors.

Choosing favorable conditions under which to conduct an experiment, carefully setting up the materials to be used, and precisely measuring the data all contribute to finding the best result. These three factors, however, are subject to a certain degree of error no matter how careful the person conducting the experiment may be. Human error is always present, and materials have limitations that affect the result of the experiment. The precision of the materials used and of the measurements taken determines how precise the result is.

As I conducted this experiment, I carefully set up the materials needed. I used a flat mirror covered by a white paper with a hole in the middle. I measured the hole to be approximately seven (7) millimeters. Finding a suitable place for the experiment was relatively easy; the hard part was setting the mirror at the right angle so that the image of the sun on the beige-colored wall was not oblong or elliptical but very close to a circle; and harder still was setting the distance from the mirror to the reflected image at almost exactly six (6) meters. I finished setting up at 3:07 in the afternoon. Afterwards, starting at 3:10, I took the required measurements at ten-minute intervals.

Treating the problem mathematically and scientifically, I had to identify what the problem requires, what data are given, what variables need to be derived, and what formulas are involved in the solution. This step-by-step procedure is the key to getting the desired result. Hence, the required quantity is D, the diameter of the sun; the given datum is L, the average radius of the earth's orbit, which is equal to 150,000,000 kilometers; the derived measurements are the distance "l" from the mirror to the reflected image on the wall, and the diameter "d" of the image itself; and the formula to be used is simple ratio and proportion: D:L = d:l, or D/L = d/l. From this formula we can derive the formula for D, where we can supply our data. Hence, we get D = (d/l)L. The…… [read more]
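The ratio-and-proportion step can be sketched in a few lines. The distance l = 6 m and L = 150,000,000 km come from the text; the image diameter d = 56 mm is an assumed placeholder, since the measured value is not given in this excerpt.

```python
# Sketch of the ratio-and-proportion step D/L = d/l. The distance
# l = 6 m and L = 150,000,000 km come from the text; the image
# diameter d = 56 mm is an ASSUMED placeholder, since the measured
# value is not given in this excerpt.
L = 150_000_000.0  # km, average radius of the earth's orbit
l = 6.0            # m, mirror-to-image distance
d = 0.056          # m, assumed diameter of the projected image

D = (d / l) * L    # d and l share units, so their ratio is dimensionless
print(D)           # about 1.4 million km
```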

8th Grade Math Introduction to Fractals Lesson Term Paper

Term Paper  |  2 pages (525 words)
Style: APA  |  Bibliography Sources: 0


8th Grade Math

Introduction to Fractals

Lesson Title:

Why Study Fractals and What Are They?

Learning Objectives/EALRs:


Investigate situations and search for patterns


Extend mathematical patterns & ideas to other disciplines

Describe examples of contributions to the development of mathematics


Recognize use of mathematics outside the classroom



Quotes by Mandelbrot - "Fractals represent a new geometry that mirrors the universe."

Quote from Fractals, the Patterns of Chaos, p. 70: "...whether the fractal is..."

Definition of a fractal


Summary of lives of Sierpinski,

Mandelbrot & Koch [

Page 1 & Page 2]

Internet sites loaded on computers ahead of time:

Fractal of the Day: http://sprot.physics.wisc.edu/fractals.htm

African Fractals:


Photo posters:

Sierpinski, Mandelbrot, Koch, Mandelbrot Set, Sierpinski Gasket, Snowflake

Butcher paper for chart

Set up:

Schedule computer lab

Overhead projector

Collect overheads & visuals

Display visuals on white board for reference and interest

Draw chart on butcher paper for recording student ideas


We're exploring and collecting ideas and perceptions about fractals because:

They're something fairly new in math.

We, as 8th graders, can understand lots about them.

Fractals often look like objects in nature.

Point out the photos displayed of Sierpinski, Mandelbrot, and Koch and the fractals they are associated with.

Distribute the mathematician background handout and briefly talk about Koch, Sierpinski, and Mandelbrot. Ask students to look for mention of these names as they browse the fractal internet sites marked on their computers.

Search the following internet sites. Go to Fractal of the Day:


Number a piece of paper 0-15. View the fractals for today and…… [read more]

Hart, B. and Risley, T. (2003) Term Paper

Term Paper  |  2 pages (631 words)
Style: APA  |  Bibliography Sources: 0


Hart, B. and Risley, T. (2003). The early catastrophe: The 30 million word gap by age 3. American Educator, Spring.

Fairly commenting on an investigator's research endeavor is a task that must be taken seriously. Although it is quite easy to have an opinion of another's research, it is something quite different to be able to evaluate the research activity in terms of topic specificity and soundness, intent or purpose, data analysis, and informational importance. The focus of this paper was on whether or not the investigators of the above-cited research publication were prudent in stating a research question and a testable hypothesis, along with informing the reader of the chosen research design and statistical data analysis, and reporting the results, limitations, and implications for future practice - all of which must lead to a best-fit research decision.

The authors of this particular research report not only failed to state a research question and testable null hypothesis but selected a sample (N=42 families) on a non-random basis. In fact, the sample selection was reported as being "pre-selected." As such, any results garnered from a statistical data analysis can only be inferred back to the selected population and not to a wider universe of language-growth-deficient children. In fact, the authors set out to examine language deficiency of lower-income children yet included in their analysis a disproportionate mix of upper-income (13), middle-income (10), lower-income (13), and welfare (6) families. Not only was there disparity in family selection; the authors also failed to report how many children were included in each of the four socio-economic status categories, thus producing error contamination of the results.

In addition to the errors associated with failure to state a research question and testable null hypothesis the authors of the article failed to alert the readers that a cross-sectional research investigation is point in time…… [read more]

Archimedes Many Experts Consider Term Paper

Term Paper  |  3 pages (794 words)
Style: MLA  |  Bibliography Sources: 3



Many experts consider Archimedes to have been the greatest mathematician of his era. The contributions that he made to the field of math, including geometry, are considered phenomenal. In addition, he is often credited with understanding and anticipating the advent of calculus 2,000 years before it happened. When he was not busy cracking the code to mathematical equations, he spent his time inventing machines, including the pulley. Today, many commonly used mathematical concepts are directly related to the mind and development of Archimedes (Archimedes of Syracuse, http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Archimedes.html).


He began his life in 287 BC in Syracuse, a city by the sea in Sicily, Italy. His birth date was determined at his death, when those who knew him claimed that he was 75 years old at the time (Biography, http://en.wikipedia.org/wiki/Archimedes#Biography).

He was born to a father who spent his life as an astronomer.

He spent his life building the foundation for many of today's mathematical calculations, formulas and concepts as well as providing the world with valuable inventions like the pulley. He died in 212 BC in the middle of the Second Punic War.

According to the popular account, Archimedes was busy contemplating a mathematical drawing in the sand. He was interrupted by a Roman soldier and replied impatiently: "Do not disturb my circles." The soldier was enraged by this, and killed Archimedes with his sword (Biography, http://en.wikipedia.org/wiki/Archimedes#Biography).

Discoveries and Achievements

Many experts refer to Archimedes as the first mathematical physicist. He contributed the foundation for the later works of Newton and Galileo. One of the things he is most well-known for discovering is the principle behind buoyancy. Legend has it that a crown was prepared for a king, and Archimedes was asked to verify its gold content and to determine whether a cheaper metal had been mixed in as well.

He was asked to make these determinations without destroying the crown so he figured out that the density of the crown would determine how fast it would sink in liquid.

Another achievement of his was the Archimedes screw. This invention is a machine that has a revolving screw shaped end that was often used to transfer water from low lying bodies to irrigation canals.

Archimedes cannot be credited with inventing the lever; however, he was the one who developed the principles that explain how a lever works.

His Law of the Lever states: Magnitudes…… [read more]
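The balance condition usually associated with the law (equal weight-times-distance on both sides of the fulcrum, for an ideal lever) can be sketched as follows; the function name and numbers are illustrative, not from the paper.

```python
# The balance condition commonly associated with the law: an ideal
# lever balances when weight times distance from the fulcrum is the
# same on both sides. Function name and numbers are illustrative.
def balances(w1, d1, w2, d2):
    """True if weight w1 at distance d1 balances w2 at distance d2."""
    return w1 * d1 == w2 * d2

print(balances(10, 2, 5, 4))  # True: 10*2 == 5*4
print(balances(10, 2, 5, 3))  # False
```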

Mathematical Knowledge for Teaching Article Review

Article Review  |  4 pages (1,055 words)
Bibliography Sources: 0


Mathematical Knowledge in Education

Differentiating Types of Mathematical Knowledge and Relevance to Education

Ball, D.L., Lubienski, S., and Mewborn, D. (2001). "Research on teaching mathematics:

The unsolved problem of teachers' mathematical knowledge." (In Handbook of Research on Teaching. New York: Macmillan).

Generally, mathematics proficiency among teachers corresponds to higher achievement in their students. While that conclusion has been supported by a substantial volume of empirical research, much less empirical research has been devoted to trying to understand how and why teacher achievement in mathematics benefits student outcomes, or what it is about mathematics, specifically, that generates this apparent relationship. Most importantly, there is a need to understand whether and to what extent teacher mathematics achievement in different aspects of mathematics matters with regard to the positive effect on learners.

According to the authors of this article, there is a fundamental difference between teaching mathematics and teaching through mathematics. In many ways, that distinction helps explain why, in general, mathematics proficiency among teachers tends to correspond to better learning outcomes. More particularly, understanding that distinction may help explain why the positive benefit of mathematics knowledge among teachers is much more evident in connection with their academic study of mathematical method than in connection with their academic study of advanced mathematics. Furthermore, it could explain why advanced mathematical achievement among teachers also corresponds to a higher incidence of negative effects on some learners, whereas that is not true in the case of teachers whose high achievement in mathematics relates more to their non-pedagogical content knowledge than to their pedagogical content knowledge.

In principle, the value of teaching mathematics is much broader than the value of the substantive material, particularly in contemporary society that provides instant and accessible electronic calculation to solve the types of mathematical problems that could typically arise in everyday adult life. Study after study suggests that teachers who are more knowledgeable about mathematics tend to promote learning better than teachers who are less proficient in mathematics.

However, there is evidence suggesting that this relationship is much more complex than simply a direct transfer of pedagogical mathematical knowledge. For example, one unexpected finding is that the benefit of greater mathematics proficiency exists even in the first grade. Presumably, all teachers are equally proficient at first-grade addition and subtraction; moreover, the academic study of mathematics in greater depth (i.e. post-calculus) should not have any impact on the level of teacher understanding of first-grade mathematics concepts. Similarly, there is no intuitive reason that either the mathematical proficiency of teachers or their highest level of mathematical study should translate to better teaching of elementary mathematical concepts. Against that background, the correspondence between teachers having studied mathematical method and the highest identifiable benefits to learning seems to explain the basis of the phenomenon.

Specifically, mathematics (especially at the elementary level), can be taught rigidly and by rote rule or by conceptual understanding. Apparently, teachers with more extensive experience in studying mathematical method are better equipped to deliver mathematics lessons in a manner conducive to inspiring… [read more]

Human Factors Affecting Safe Operation Data Analysis Chapter

Data Analysis Chapter  |  15 pages (4,150 words)
Bibliography Sources: 2


¶ … Human Factors Affecting Safe

Operation Of The UAV

Study of Selected Human Factors affecting safe operation of the UAV

This chapter presents the findings of the thesis. Survey questionnaires were collected from the 35 respondents. The data were collected to test the following hypotheses:

Ho: "Majority of UAV pilots do not agree that graduating from Undergraduate Pilot… [read more]

Improved Your Knowledge, Skills, Abilities, and Yourself Essay

Essay  |  2 pages (704 words)
Bibliography Sources: 0


¶ … Improved Your Knowledge, Skills, Abilities, and Yourself in This Session Through This Course

The mathematical skill taught in the course is necessary for a career related to business, investment, and analysis. In other words, the course shows the path to business mathematics through the exponential and logarithmic functions that management uses in Management Information Systems (MIS); together with functions, set theory, and other allied topics taught for analyzing data, these help solve problems related to the market, and the necessary decisions can be based on this knowledge. To that extent I feel I have gained a lot. Overall, the course has given me the ability to attack problems that I once feared. Because of this course, my approach to math, which I always viewed with trepidation, has changed, and I am now ready and willing to go further in exploring mathematics, both for its academic interest and as a useful tool in my daily work.

2. Evaluation of the work you did during the session for the class and explanations of ways you could have performed better

I have been introduced to, and given guidance and training in, finding the results of complicated functions that can answer everyday questions I may face in my trade or occupation and in life generally. These include data sets. The lessons on functions were tough, and I believe I could have put in better effort there. Functions are still hazy to me, but I found the calculation of simple things like interest and profitability very useful. I concentrated more on those topics, and perhaps that caused the problems in my understanding of the others. I believe I could have done better with the study of functions; I performed rather well overall, but I could have done better.

There were small gaps in my understanding probably a result of my anxiety…… [read more]

Linear Regression Models (Meier, Chapter 18 Article Review

Article Review  |  5 pages (1,293 words)
Bibliography Sources: 5


Linear Regression models (Meier, Chapter 18 / 19)

These are used in order to determine whether a correlation (or relationship) exists between one element and another and, if so, in which direction (negative or positive).

The two variables are plotted on a graph: the independent variable on the x (horizontal) axis, and the dependent variable on the y (vertical) axis. The gradient of the line fitted through them is called the 'slope', and the point where the line crosses the y axis is called the 'intercept'.

The theorem used tells us that the slope of the line equals the change in y (DV) for a given change in x (IV). The direction and gradient of the slope describe the relationship between X and Y.

Linear regression, like the previous models, is used to apply results from a population sample to the population as a whole. Linear regression is also useful for predicting occurrences in that sphere. For instance, linear regression may be used to determine whether there is a correlation between vehicle collisions and rainy days. If so, one can predict that the stormier the weather, the greater the number of collisions.

Goodness of Fit

We will want to know the amount of error, i.e. how well the regression line fits the data. The distance of a point from the regression line is known as error; a standard calculation exists to find this. Another goodness-of-fit measure is the standard error of the estimate, a calculation used to find out the extent to which the results from the sampled population will correspond to the population as a whole. Thirdly, the coefficient of determination is used to measure how much of the total variation in the dependent variable (Y) is explained by the regression. Complex calculations exist for this. (All of these calculations can be worked out by special computer programs too.)

Linear regression has various assumptions:

1. For any value in X, the errors in predicting Y are normally distributed with a mean of zero.

2. Errors do not get larger as X becomes larger; rather, the errors remain constant along the line regardless of the X value.

3. The errors of Y and X are independent of one another.

4. Both IV and DV (X and Y) must be interval variables (i.e. numerical data).

5. The relationships between X and Y are linear.

Ignoring these assumptions will result in faulty statistical conclusions.
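The fitting procedure the section describes can be sketched with a minimal least-squares routine. The rainy-days/collisions numbers below are invented for illustration and are perfectly linear, so the fitted line recovers them exactly.

```python
# A minimal least-squares sketch of the model described above; the
# rainy-days/collisions numbers are invented and perfectly linear.
def fit_line(xs, ys):
    """Return (intercept, slope) minimizing squared vertical errors."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

xs = [1, 2, 3, 4, 5]     # e.g. rainy days
ys = [3, 5, 7, 9, 11]    # e.g. collisions: exactly y = 1 + 2x
print(fit_line(xs, ys))  # (1.0, 2.0)
```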

Topic 2: Comparing 2 Groups

A researcher may run the same study on two different groups with one, for instance, acting as control and the other as experimental. He may then want to know whether differences are observed between the two groups.

1. Research and null hypotheses are drawn up stating that: (a) a significant difference will be found; (b) a significant difference will not be found between the two groups.

e.g. Alternative Hyp. H1: Employees who have taken *program will have higher job scores

Null hyp (H0): There is no difference in scores between employees who have taken program and employees who have not.

2. Mean and standard deviation of each group is calculated… [read more]
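The comparison outlined in these steps can be sketched with a pooled two-sample t statistic; the job-score numbers below are invented for the program/no-program example.

```python
# Hedged sketch of the two-group comparison above: a pooled
# two-sample t statistic for the program/no-program job-score
# example. The scores are invented for illustration.
from math import sqrt

def t_statistic(a, b):
    """Pooled-variance t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / sqrt(pooled * (1 / na + 1 / nb))

program = [82, 85, 88, 90, 86]     # took the program
no_program = [78, 80, 83, 79, 81]  # did not
print(t_statistic(program, no_program))  # about 3.735
```

A large t value like this one would lead to rejecting the null hypothesis at conventional significance levels.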

NBA Stats Term Paper

Term Paper  |  2 pages (614 words)
Bibliography Sources: 0


1 (for points). Mean height is 78.7 and mean points is 16.843, so calculating the covariance of this data pair would look like this:

(72 - 78.7)(13.1 - 16.843) = 25.0781

In order to develop the value for each pair, two new columns were created in the Excel sheet to calculate the difference of the data point and the mean with the following formulas:

=a2-78.7 (for height, pasted into the other 99 rows which automatically adjust the cell value for each row, as =a3-78.7, a4-78.7, etc.), and =b2-16.843 (for points, also pasted).

A third column with the following formula created the product of each row in these two columns:

=c2*d2 (pasted for all rows).

The AVERAGE function was used to determine the mean of this last column, which is the covariance: 2.1679. Standard deviations are 3.60274867 (for height) and 3.74805687 (for points), meaning Pearson's coefficient is calculated as:

2.1679/(3.60274867 * 3.74805687) = 0.160545859.
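The spreadsheet steps above can be reproduced without Excel. The sketch below is my own: it uses a small invented sample (the excerpt's full 100-player data set is not reproduced here) and divides by n throughout, matching the excerpt's use of AVERAGE on the products column.

```python
# Covariance and Pearson's r, mirroring the spreadsheet columns.
# Heights/points below are invented; the excerpt's real sample has 100 players.
heights = [72, 75, 78, 80, 82]
points = [13.1, 15.0, 16.5, 18.2, 20.0]

n = len(heights)
mean_h = sum(heights) / n
mean_p = sum(points) / n

# Columns C and D analogue: deviations from the mean; column E: their products.
products = [(h - mean_h) * (p - mean_p) for h, p in zip(heights, points)]

# AVERAGE of the products column = covariance (dividing by n).
cov = sum(products) / n

# Population standard deviations (dividing by n, to match the covariance).
sd_h = (sum((h - mean_h) ** 2 for h in heights) / n) ** 0.5
sd_p = (sum((p - mean_p) ** 2 for p in points) / n) ** 0.5

r = cov / (sd_h * sd_p)  # Pearson's correlation coefficient
```

Dividing both the covariance and the standard deviations by the same n keeps the ratio consistent; mixing n and n - 1 denominators would distort r.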

Linear regression attempts to find the line of best fit for a data sample. The basic equation of any line for variables x and y is given as y = a + bx. For a linear regression slope (b) is calculated as:

((n*Σxy) - (Σx)(Σy)) / ((n*Σx²) - (Σx)²)

Adding another column to the spreadsheet enabled the quick calculation of height * points (xy) for each data pair, and a column for x² was also added; the SUM function was used to calculate Σxy (132771.2), Σx (7870), Σy (1684.3), and Σx² (620654). With n (population size) 100, the slope (b) of the equation becomes:

((100*132771.2) - (7870)(1684.3)) / ((100*620654) - (7870)²) = 0.168708171.

The intercept (a) is calculated as:

(Σy - b(Σx))/n, which with the substituted values becomes

(1684.3-(0.168708171*7870))/100 = 3.56566694.

The full linear regression equation for this data set, then, is given as:

points = 3.56566694 + (0.168708171*height)
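The slope-and-intercept computation from these running sums can also be sketched in code. The version below uses the sums reported in the excerpt; the prediction comment at the end is my own addition for illustration.

```python
# Linear regression slope and intercept from the summary sums reported above.
n = 100
sum_xy = 132771.2   # Σxy (height * points)
sum_x = 7870        # Σx (heights)
sum_y = 1684.3      # Σy (points)
sum_x2 = 620654     # Σx²

# Slope: ((n·Σxy) - (Σx)(Σy)) / ((n·Σx²) - (Σx)²)
b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)

# Intercept: (Σy - b·Σx) / n
a = (sum_y - b * sum_x) / n

# Predicted points for, say, a 78-inch player: a + b * 78
```

The computed values match the excerpt's 0.168708171 (slope) and 3.56566694 (intercept).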

Chi-square analysis cannot be applied…… [read more]

Devise a Standard of Existence Rule Essay

Essay  |  4 pages (1,190 words)
Bibliography Sources: 4


¶ … Existence / Rule for Existence

Existence is a philosophical question that has eluded thinkers for centuries. From as early as ancient Greece, philosophers have sought to define existence as a concept that encompasses not only the physical world, but also those objects that exist on different non-physical planes. It is these issues that present the challenge in determining the true meaning of existence and non-existence.

An object exists when it has a form that is not in violation of any universal rules or truths. A form is any physical, metaphysical, or cognitive presentation. Universal rules are those derived in science and mathematics such as gravity, mass, geometry, and algebra. Truths are those statements that are absolute and cannot be refuted. When held against this definition of existence, a horse, the number four, and a unicorn exist whereas the square circle does not exist.

A horse is the most obvious of the items that exist. The reason is that the horse fulfills the definition perfectly. First, the horse exists in a form, two forms actually. The first form is its archetypical form. This is the form in the mind that creates the definition of a horse. An object exists in an archetypical form when the mention of the object brings a specific image or definition to the mind. In this case, when the word "horse" is mentioned, a person immediately conjures up the image of a four-legged mammal with hooves, mane, and tail. Horses also exist in a physical form as well. Their physical form exists in the third dimension alongside humans. This means that horses can be touched, smelled, heard, watched, and interacted with. It is these features that further solidify the horse's existence within the mind. Now to address the second and third parts of the definition. A horse is not in violation of any universal rules or truths. Its very definition, in fact, is solely derived from its physical form and the observations thereof. So, a horse meets the full criteria of an existing object and therefore does exist.

The number 4 also exists, though it does not exist in the same way that a horse exists. Unlike a horse, the number four is not a living thing, in the sense that it breathes, eats, or grows. It does still, nonetheless, exist. The number 4, like the horse, has two forms. The first form is the archetypical form. When the number 4 is mentioned, those trained in mathematical law immediately conjure up the image. While this time the image is not as physical as it was with the horse, the concept can still be conjured within the mind and is solidified when tied to another object such as the horse. When 4 horses are mentioned, it becomes even easier to envision the number 4 in use. The second form that the number four can take is physical. Once again, unlike the horse, the physical form is not alive, but it still exists. This form, commonly referred to… [read more]

Euclid of Alexandria: 325 Term Paper

Term Paper  |  2 pages (527 words)
Bibliography Sources: 1+


"; a clear insight into his apparent wisdom. The second story concerns a student who, after his first lesson, asked what he would gain in life from learning such things as he was in the school; Euclid called his slave and said: "Give him a coin since he must make gain by what he learns."

Arabian and Syrian writers have said Euclid's father was Naucrates, and his grandfather, Zenarchus. They also said he was a Greek who was born in Tyre and lived in Damascus. Unfortunately, most of this information has little evidence of validity. Added perplexity began around the 14th century, when the Byzantine writer Theodorus Metochita (d. 1332) wrote of "Euclid of Megara, the Socratic philosopher, contemporary of Plato." This Euclid, Euclid of Megara, lived around 400 B.C.E. and was actually a pupil of Socrates who founded a philosophical school, which Plato did not like. Nothing is known of Euclid's death.

Other works of Euclid include: The Data, for use in the solution of problems by geometrical analysis, On Divisions (of figures), The Optics, and The Phenomena, a treatise on the geometry of the sphere for use in astronomy. His lost Elements of Music may have provided the basis for the extant Sectio Canonis on the Pythagorean theory of music.


Gillispie, Charles C., ed. The Dictionary of Scientific Biography. 16 vols., 2 supps. New York: Charles Scribner's Sons, 1970-1990. S.v. "Euclid: Life and Works" by Ivor

Heath, Thomas L. The Thirteen Books of Euclid's Elements. 2 vols. Cambridge: Cambridge University Press, 1926.

Frankland, William Barrett. The first book of Euclid's Elements…… [read more]

How Do We Combat Math Anxiety? Research Paper

Research Paper  |  5 pages (1,548 words)
Bibliography Sources: 5


Math Anxiety

How to Combat Math Anxiety

Causes for Anxiety

How to Begin to Help

What Schools Can Do

What Parents Can Do

Albert Einstein once stated, "Do not worry about your difficulties in mathematics; I assure you mine are greater."

Yet no matter how great this man's difficulties were, for those suffering from math anxiety, any math problem is… [read more]
