Study "Mathematics / Statistics" Essays 56-108


Bio-Statistics Research Activities, Whether Clinical Term Paper

… In other words, the authors did not build a medical or healthcare-based paradigm for the study. Following a well-defined research question, the investigators' task is to follow up with a statement of a testable null hypothesis or hypotheses. The null… [read more]

Mathematics George Cantor Term Paper

… When people began to understand that mathematics could influence everything from photography to art and science, they took a greater interest in mathematics and philosophical thought, which in turn led to even more innovation and scientific thinking. In fact, today, many scientists and mathematicians feel Cantor's work represented a real paradigm shift, or a radical change in mathematical and philosophical thought ("Cantor"). Two of his lasting models were "Cantor's Comb," which showed all points disconnected from each other, and "Cantor's Dust," which treats these disconnected sets as "fractal dust" (Breen). Cantor's work revolutionized mathematics and encouraged people to think philosophically about what numbers really are.

Unfortunately, Cantor's mental health deteriorated as he aged, and he had several nervous breakdowns in reaction to criticism of his work. Today, many believe he suffered from bipolar disorder ("Cantor"). It is sad to think that Cantor may have contributed even more to the mathematical world had he not suffered from mental disorders, as he often stopped working during his bouts with depression.

Another German mathematician, David Hilbert, described Cantor's work as "the finest product of mathematical genius and one of the supreme achievements of purely intellectual human activity" (O'Connor and Robertson). Cantor's theories changed the way mathematicians thought about infinity and sets, and translated into many areas of society. Cantor's work is still questioned and studied today, but the importance of his theories is not questioned.


Author not Available. "Georg Cantor." 2004. 13 April 2004.

Breen, Craig. "Georg Cantor Page." Personal Web Page. 2004. 13 April 2004.

Everdell, William R. The First Moderns: Profiles in the Origins of Twentieth-Century Thought. Chicago: University of Chicago Press, 1997.

O'Connor, J.J. and Robertson, E.F. "Georg Cantor." University of St. Andrews. 1998. 13 April 2004.

"Transfinite Number." Van Nostrand Company, Inc. Van Nostrand's Scientific Encyclopedia. Princeton: Van Nostrand, 1968.… [read more]

Germane Quality of Mathematics Research Paper

… Mathematical puzzles are a longstanding facet of mathematics with numerous applications in the world today. In this respect, there is a significant amount of fascinating information regarding this element of mathematics. This document will concentrate on several different facets of mathematical puzzles, beginning with their history, which extends back nearly as far as the history of mathematics itself. It will also detail some of the actual mathematical principles at work in examples of mathematical puzzles. Additionally, the paper will provide real-world examples of how mathematical puzzles have shaped society at various points in time. Cumulatively, these three points will attest to the immense importance ascribed to mathematical puzzles in the past and present.

In researching the history of mathematical puzzles, it is nearly impossible to distinguish that history from the history of mathematics itself. Some sources date the history of these puzzles to at least 1800 BCE and their deployment by the Egyptians (Kent, n.d.). Interestingly enough, there are numerous principles of mathematics that are directly descended from the Egyptians themselves, which helps to buttress the viewpoint that math-based puzzles coincided with the history of mathematics in general. Indeed, there is evidence of Egyptian mathematics puzzles dating back 3,600 years (1650 B.C.) that are strikingly similar to the riddle about the man going to St. Ives with seven wives (NY Times). This Egyptian text was preserved on papyrus, which presages the notion of using textbooks for math puzzles. These puzzles have descended into modernity orally (such as in chants and riddles) and through the formal implementation of textbooks. Midway through the 20th century, non-cooperative math games provided the basis of John Nash's game theory.
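The St. Ives riddle and its Egyptian ancestor both reduce to summing powers of seven; a quick sketch (the four-level count here is illustrative, not a claim about either historical text):

```python
# Each level multiplies the count by 7 (e.g. wives, sacks, cats, kits).
# The classic puzzle sums the powers of 7 down to the chosen depth.
def sevens_total(levels: int) -> int:
    return sum(7 ** k for k in range(1, levels + 1))

print(sevens_total(4))  # 7 + 49 + 343 + 2401 = 2800
```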

The mathematics of math games is actually fairly diverse, and largely hinges upon which particular math game one is playing. Still, there are some general principles that apply to most of these games. For instance, cardinality is typically important in math games. Cardinality is…… [read more]

Different Components of Statistical Testing Essay

… Statistics in Research: Different Factors to Consider

Statistics in research take two primary forms: inferential and descriptive. Descriptive statistics, as the name suggests, merely seeks to describe a particular phenomenon as it exists numerically. Examples of descriptive statistics include determining the mean, median, mode or midrange of a particular set of figures or establishing a correlation between two sets of data (Taylor 2015). Presenting statistics in a graph is also considered descriptive in nature (Taylor 2015). Inferential statistics, in contrast, are used when it is impossible to assess data about an entire population group. "It is typically impossible or infeasible to examine each member of the population individually. So we choose a representative subset of the population, called a sample" (Taylor 2015). A good example of this is polling after an election: since it is impossible to accumulate data about all of the voters, a demographically representative group of voters may be polled after they vote. Multiple measurements are often taken in the case of inferential statistics to ensure greater accuracy.
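The descriptive measures named above can be computed directly with Python's standard library; the scores below are invented purely for illustration:

```python
import statistics

# Hypothetical sample of exam scores (illustrative only)
scores = [72, 85, 85, 90, 61, 77, 85, 68]

print(statistics.mean(scores))    # arithmetic mean: 77.875
print(statistics.median(scores))  # middle value: 81
print(statistics.mode(scores))    # most frequent value: 85
print((min(scores) + max(scores)) / 2)  # midrange: 75.5
```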

Another distinction in regards to statistical findings is the question of statistical significance. All statistics contain some margin of error. Statistical significance means that given the sample size and the probability of error, the computed difference is still likely to be true. For example, "a difference of 3% (58% for women minus 55% for men) can be statistically significant if the sample size is big enough" ("Statistical vs. practical significance," 2015). However, merely because a finding is statistically significant does not necessarily mean it is practically significant. Practical significance means that the finding is notable enough that it will have a material impact upon decision-making in the real world. Factors may include cost, feasibility, and the extent to which the intervention would have a meaningful and demonstrable effect on the quality of participants' lives, given the size of the effect ("Statistical vs. practical significance," 2015).
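How the same 3-point gap flips from insignificant to significant as the sample grows can be sketched with a two-proportion z-test (a standard test, though the cited source does not specify one; the sample sizes here are invented):

```python
import math

def two_prop_z(p1, p2, n1, n2):
    """Two-proportion z-test; returns (z, two-tailed p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed normal p-value
    return z, p

# The same 3-point gap (58% vs. 55%) at two different sample sizes:
_, p_small = two_prop_z(0.58, 0.55, 100, 100)    # 100 per group
_, p_large = two_prop_z(0.58, 0.55, 5000, 5000)  # 5000 per group
print(p_small > 0.05, p_large < 0.05)  # True True
```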

There are two major types of errors in statistical analysis: Type I and Type II (Hopkins 2013). Type I is when a study is overly sensitive and over-estimates the magnitude of the effect of the study which often occurs without an appropriate use of a control; Type II is when the effect of the intervention is underestimated (Hopkins 2013). A Type II error often occurs when too small a sample size is selected (Hopkins 2013). Another type of error is that of bias, either unintentional or intentional upon the part of the study design (Hopkins 2013).…… [read more]

SPSS Data Analysis Research Paper

… However, to determine the strength of this relationship a Pearson's product-moment correlation coefficient (r) can be calculated for these two variables. Based on the SPSS results, there is a very strong, statistically significant correlation between hours and scores [r (18) = .967, p < .01, two-tailed]. The percentage of the variation in the dependent variable due to the independent variable is also very high (r² = .934), which suggests that the average number of hours studied per week may account for 93.4% of the variation in final exam grades; however, a correlation cannot determine causality, only that there is a strong association between the two variables.
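A Pearson r like the one reported can be computed from first principles; the hours/scores pairs below are invented stand-ins, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation from raw deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical hours-studied vs. exam-score pairs (illustrative only)
hours  = [2, 4, 5, 7, 9, 11]
scores = [55, 60, 62, 68, 72, 77]
r = pearson_r(hours, scores)
print(round(r, 3), round(r ** 2, 3))  # r, and r^2 = shared variance
```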

There are a number of potential ethical considerations concerning how the data was collected in this study. Of primary concern was the possibility that knowledge of average hours studied could influence final grades for the course; however, the professor collected this data at the end of the semester during the final examination. While there is still a potential ethical concern, the assumption is that the final exam was graded before the data were viewed and analyzed. Even so, most students would expect the hours data and final exam scores to remain confidential during the grading period.


The criterion variable (Y) is the dependent variable, which in this study was the final exam score. The predictor variable (X) is the independent variable, or in this case the average hours of study per week. The form of the basic linear regression equation is Y = β0 + β1X1 + β2X2 + … + βnXn + ε, with β0 representing the Y-intercept, β1-n representing the slopes, and ε representing the error of prediction. The values of the dependent variable can therefore be estimated by the regression equation: Y = 47.918 + 2.619*X, with β1 = 2.619, t (18) = 16.014, p < .0001. For example, if a student wanted to know what grade he or she might get by studying 15 hours a week, he or she would substitute 15 for X in this equation and get a predicted final exam score of about 87. The accuracy of this prediction depends on the amount of variation in the data, which is the difference between the best-fit line and the observed values (Y − Y' = ε) and is given as the standard error of the estimate (SEE = 3.842). These calculations allow students and professors to predict how much studying must occur on…… [read more]
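The prediction step can be sketched directly from the reported coefficients (intercept 47.918, slope 2.619):

```python
def predict_score(hours, intercept=47.918, slope=2.619):
    """Predicted final-exam score from the fitted line Y = b0 + b1*X."""
    return intercept + slope * hours

pred = predict_score(15)
print(round(pred))  # 47.918 + 2.619 * 15 = 87.203, i.e. about 87
```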

Statistics in Criminal Justice Discussion Chapter

… M8D1

Questions like this one make me wonder whether I should give an honest response or give the response that I believe is being sought by the question. In all honestly, my Minimal Statistics Baselines (MSB) is zero. I am honestly not committing to any MSB from this point forward; if I happen to be able to live without ever using statistical analysis again, then I will not be taking any action to attempt to incorporate statistics into my life. I do not intend to read newspaper articles with the purpose of understanding the statistics or look up research articles at a library for the purposes of understanding the statistical analysis.

However, while I have no intention of taking steps to have any type of MSB because of an intentional focus on statistics, I am well aware that I need to use and understand statistics to be able to function as an informed adult. Thinking about election season and the apparently at-odds poll results that are always being touted to support different issues, I realize that understanding how that data was obtained and analyzed is critical to being able to understand that information. As a parent, my child will take standardized tests, and I will need to understand basis statistics in order to understand what test scores mean. If I am ill and looking at potential therapies, I will need to understand the relative advantages and disadvantages of different treatment modalities, and understanding those requires understanding statistics. Depending on where my career takes me, I may find myself…… [read more]

Normal Distribution Central Limit Theorem and Point Estimate and an Interval Term Paper

… Normal distribution is very much what it sounds like. This distribution is symmetrical and is shaped like a bell when graphed on the Cartesian plane. The normal distribution has the mean, the median, and the mode all located at basically the same place on the distribution. This occurs at the peak, and the frequencies gradually decrease at both ends of this bell-shaped curve.

Unfortunately, this is simply one model for looking at a problem, and no definite predictions can be made with this or any other statistical tool; however, this model does have real practical value. Many things in life follow this model and are normally distributed, offering at least a guide for how to best understand and predict behavior mathematically using statistics.

Suppose X is normal with mean μ and variance σ². Any probability involving X can be computed by converting to the z-score, where Z = (X − μ)/σ. E.g.: if the mean IQ score for all test-takers is 100 and the standard deviation is 10, what is the z-score of someone with a raw IQ score of 127? The z-score defined above measures how many standard deviations X is from its mean. The z-score is the most appropriate way to express distances from the mean. For example, being 27 points above the mean is notable if the standard deviation is 10, but not so remarkable if the standard deviation is 20 (z = 2.7 vs. z = 1.35).
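The z-score arithmetic above, as a sketch:

```python
def z_score(x, mu, sigma):
    """Standard deviations between a raw score x and the mean mu."""
    return (x - mu) / sigma

print(z_score(127, 100, 10))  # 2.7 standard deviations above the mean
print(z_score(127, 100, 20))  # 1.35 -- same raw gap, half as unusual
```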

Question 2

The central limit theorem states that the distribution of the sum of a large number of independent, identically distributed variables will be approximately normal, regardless of the underlying distribution. The importance of the central limit theorem is very widespread as it is the reason that many statistical procedures work. Regardless of the population distribution model, as the sample size increases, the sample mean tends to be normally distributed around the population mean, and its standard deviation shrinks as n increases.

To use the central limit theorem, the samples must be independent and large enough that a decent amount of data can be gathered for this statistical tool. When taking samples, each one should represent a random sample from the population or follow the population distribution. The sample size should also be less than ten percent of the entire population.

Simple random sampling refers to any sampling method in which the population consists of N objects, the sample consists of n objects, and all possible samples of n objects are equally likely to occur. This method allows researchers to use statistical methods to analyze sample results. Confidence intervals are created that extend around the sample mean to help model the situation.…… [read more]
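The theorem is easy to demonstrate by simulation; this sketch draws from a uniform (decidedly non-normal) distribution and shows the sample means clustering around the population mean:

```python
import random
import statistics

random.seed(0)  # reproducible illustration

# Draw many samples from a uniform distribution on [0, 1); by the CLT
# the *means* of those samples cluster tightly around 0.5.
def sample_means(n_samples, sample_size):
    return [statistics.mean(random.random() for _ in range(sample_size))
            for _ in range(n_samples)]

means = sample_means(1000, 30)
print(round(statistics.mean(means), 2))  # close to the population mean 0.5
print(statistics.stdev(means))           # roughly (1/sqrt(12))/sqrt(30) ~ 0.053
```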

Stat Notes Sampling Error and Standard Research Paper

… Stat Notes

Sampling error and standard error of the mean (SEM) measure the error in assuming the sample accurately represents the population; the smaller the error, the more closely the sample can be assumed to match the population.

Confidence intervals (CIs) are ranges in which a value might fall and are limited by the confidence level -- the higher the confidence level (the degree of certainty desired), the larger the confidence interval will be to ensure the data point falls within it.

Null hypothesis is always that there is no effect of an intervention/variable -- that measured groups do not differ significantly on the measured area(s).

Alternative hypothesis states that there is an effect of the measured intervention(s)/variable(s).

Probability sampling is used to obtain a study sample that is representative of the population; even fully randomized sampling rarely yields a perfectly representative sample, so reducing sampling error is key to reliable and valid results. Because of the relationship between sample size and the standard error of the mean (SEM), a larger sample size means a smaller error.
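The sample-size relationship can be sketched directly from the SEM formula (the standard deviation of 10 is invented for illustration):

```python
import math

def sem(sd, n):
    """Standard error of the mean: sd / sqrt(n)."""
    return sd / math.sqrt(n)

# Quadrupling the sample size halves the SEM:
print(sem(10, 25))   # 2.0
print(sem(10, 100))  # 1.0
```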

Statistical inference refers to both an estimation of parameters such as the mean and other basic summary statistics of a data set/population, and to hypothesis testing.

Interval estimation is an estimated confidence interval.
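The interval-estimation idea, sketched with the usual z-based formula (the mean, SD, and n here are invented; note the higher confidence level widens the interval, as described above):

```python
import math

def confidence_interval(mean, sd, n, z=1.96):
    """z-based CI for a mean; z = 1.96 gives ~95% confidence."""
    half_width = z * sd / math.sqrt(n)
    return mean - half_width, mean + half_width

lo95, hi95 = confidence_interval(100, 15, 36)           # 95% level
lo99, hi99 = confidence_interval(100, 15, 36, z=2.576)  # 99% level is wider
print((round(lo95, 2), round(hi95, 2)))  # (95.1, 104.9)
print((round(lo99, 2), round(hi99, 2)))
```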

Hypothesis testing includes the objective means of determining if a null hypothesis should or…… [read more]

Math Anxiety Term Paper

… Given that this is the case, it is shown that performance in the math class necessarily means that there is greater pressure on the student to be correct than there is in other subjects of academic discourse.

Extensive research has been conducted into the topic of math anxiety in both psychological and physiological avenues. Researchers assert that "Math anxiety can bring about widespread, intergenerational discomfort with the subject, which could lead to anything from fewer students pursuing math and science careers to less public interest in financial markets" (Sparks 2011, p. 1). This is a very interesting perspective. If these findings are accurate, then the anxiety an individual feels might be shaped not only by their own history with math but also by the experiences of their parents or guardians. Thinking about the issue, this actually makes a lot of sense. When a child does not understand his or her homework, the child will go to a trusted adult for help with the material. If that adult also does not understand the material, or reacts negatively to the topic, then that will influence the child, providing the youngster with another example of a person who responds to mathematics in the same way. This can be damaging to the relationship between the child and math at a potentially exponential rate.

During the interview, the math instructor I talked with gave me their opinion about what might be the basis for math anxiety. They believe that math anxiety is largely caused by a lack of confidence. If a person has been unsuccessful with math throughout their childhood, then they will more than likely have negative opinions about their abilities in the subject once they have reached adulthood. Building self-confidence, the instructor asserts, will help with the anxiety we feel when we are dealing with mathematics. There are ways in which this confidence can be rebuilt, such as reviewing knowledge a person already has, building confidence that they do in fact have mathematical knowledge. Another way is by seeking out help from teachers and classmates. Admitting that you are struggling in math is the first step to gaining the knowledge you need to be successful in the subject.

Many people experience math anxiety, and it seems to be a symptom of a greater truth. Anxiety will ultimately beget further anxiety. When a person struggles with something and continues to deal with the issue without overcoming those early struggles, the problem becomes exacerbated. Not liking math or not succeeding in math as a young person becomes something of a self-fulfilling prophecy. If a child fails at math, then he or she goes into the next examination or the next math class fully expecting to fail again. They become so consumed with this idea that it winds up coming to pass. I did not understand the all-consuming nature of math anxiety, but it seems that it could strike anyone… [read more]

Create and Analyze a Self-Designed Fictitious Act vs. SAT Scores of Low Income Students Research Paper

… Score Stats

A Statistical Analysis of ACT vs. SAT Scores of Low Income Students

Study Description

Apparent income disparities in standardized test scores have been noted in many previous studies, with the determination that the income level of a student's family -- along with other sociocultural factors -- has a major effect on their ability to achieve on standardized tests (Kohn, 2002). For tests like the ACT and the SAT, which are commonly (almost universally) used by colleges and universities in the United States as part of their admissions criteria and decision-making process, a gap in performance caused by income levels puts low-income students at a significant disadvantage for entry into four-year degree programs, which in turn limits earning potential and thus could in fact perpetuate lower income levels (Kohn, 2002). This study set out to determine if there is a significant difference in the ACT test scores of low-income students when compared to the SAT scores of the same student population, as a means of determining if test composition can mediate or make more pronounced any impacts of income standing on student performance. Thirty students who completed both the ACT and the SAT tests and who matched income criteria of living at or below 150% of the poverty level were included in the study, with their total scores on both tests compared in order to determine if a significant difference exists. Such a difference would indicate that something in the test structure(s) worked to influence the impact that a low-income background has been observed to have on standardized test scores.

Statement of Hypothesis

The null hypothesis is that there will be no difference between the means. The alternative hypothesis, which is the hypothesis this study is investigating, is that there is a significant difference between the mean scores on the ACT and the SAT, indicating that test structure can determine the degree to which income impacts test scores.

Variable Description

Income level was one independent variable used to determine eligibility/inclusion for the study, with a family income of 150% of the defined poverty level the upper income limit. Subjects were randomly selected and were also polled for age and gender, though these variables were not analyzed further. Income level was not recorded past the point of inclusion, therefore figures for this data are not given; gender and age are given and a descriptive analysis was performed. Dependent variables of interest were test scores on the ACT and test scores on the SAT. These two sets of variables constitute the data points…… [read more]
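A paired comparison like the one described might be sketched as follows. The scores below are invented and placed on a common standardized (z-scaled) metric, since raw ACT and SAT totals are not directly comparable; this is an assumption about how the comparison would be operationalized, not the study's actual method or data:

```python
import math
import statistics

def paired_t(xs, ys):
    """Paired t statistic: mean difference over its standard error."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Hypothetical standardized scores for the same 8 students,
# ACT first, SAT second (illustrative only).
act = [0.10, -0.40, 0.25, -0.10, 0.30, -0.55, 0.05, 0.20]
sat = [-0.20, -0.60, 0.05, -0.35, 0.10, -0.70, -0.15, 0.00]
print(round(paired_t(act, sat), 2))  # compare against a t critical value
```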

Behavior Science Research a Researcher Research Paper

… An ordinal measure would ask the 130 individuals if they bought:

One to three vegetables each week

Three to five vegetables each week

More than five vegetables each week

A scale measure would ask the 130 individuals, on a scale of one to five (with 1 representing no vegetables and 5 representing many vegetables), how many vegetables they purchased each week.

7. In the fall of 2008, the U.S. stock market plummeted several times, which meant grave consequences for the world economy. A researcher might assess the economic effects this situation had by seeing how much money people saved in 2008. Those amounts could be compared to how much money people saved in more economically stable years. How might you calculate (or operationalize) economic implications at a national level?

The researcher would examine the amount of money people saved in 2008 and compare it to the amount saved in other years that were more economically stable and provide as a result a national average for the amount saved each year to be compared.

8. A researcher might be interested in evaluating how the physical and emotional distance a person had from Manhattan at the time of the 9/11 terrorist attacks relates to the accuracy of their memory for the event. Identify the independent variables and the dependent variable.

Physical distance and emotional distance are the independent variables, and the accuracy of memory for the 9/11 terrorist attacks is the dependent variable.

9. Referencing Exercise 8, imagine that physical distance is assessed as within 100 miles, or 100 miles or farther; also, imagine that emotional distance is assessed as knowing no one who was affected, knowing people who were affected but lived, and knowing someone who died in the events. How many levels do the independent variables have? Physical distance has two levels, and emotional distance has three levels.

10. A study of effects of skin tone ( light, medium, and dark) on the severity of facial wrinkles in middle age might be of interest to cosmetic surgeons.

a. What is the independent variable in the study.

The independent variable in this study is skin tone (light, medium, or dark).

b. What is the dependent variable in the study.

The dependent variable in this study is the severity of facial wrinkles.

c. How many levels does the independent variable have?

The independent variable, skin tone, has three levels: light, medium, and dark.

11. Referring to Exercise 10, what might be the purpose of an outlier analysis in this case? What…… [read more]

Frequency Distribution Below Shows Research Paper

… frequency distribution below shows the distribution for suspended solid concentration (in ppm) in river water of 50 different waters collected in September 2011.

Concentration (ppm) [frequency table of the 50 river-water samples not preserved in this excerpt]

What percentage of the rivers had suspended solid concentration greater than or equal to 70?

Total samples (N) =50. (7+2+2)/50=0.22. 22% have a concentration of 70 or greater.

Calculate the mean of this frequency distribution.

Midpoint for each concentration group is determined and multiplied by frequency; results summed and divided by N (50). Mean = 57.1

In what class interval must the median lie? Explain your answer. (You don't have to find the median)

The median must lie in the 50-59 interval, as this is where the middle data points (25 and 26) would fall in this data set of 50 points (there are 17 points before this group and 23 after; though on the lower end, this group contains the median data points).

Assume that the smallest observation in this dataset is 20. Suppose this observation were incorrectly recorded as 2 instead of 20. Will the mean increase, decrease, or remain the same? Will the median increase, decrease or remain the same? Explain.

The mean of the raw data set -- that is, not of the frequency distribution -- would change based on this incorrect record somewhat significantly. The mean of the frequency distribution would change very slightly if the one observation in the 20-29 category simply disappeared, and even more slightly if the lowest group was altered to include this point (the midpoint used to estimate the mean would drop to 14.5). The median would remain the same, however, as moving the lowest data point would not change the order of the data or the position/identity of the central data point(s).
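The midpoint-based mean calculation can be reproduced in code. The original table did not survive extraction, so the frequencies below are one hypothetical set consistent with the stated answers (N = 50, mean 57.1, 22% at or above 70, 17 observations below the 50-59 class):

```python
# Hypothetical class intervals and counts, chosen to match the answers above.
intervals = [(20, 29), (30, 39), (40, 49), (50, 59),
             (60, 69), (70, 79), (80, 89), (90, 99)]
freqs = [3, 4, 10, 10, 12, 7, 2, 2]

n = sum(freqs)  # 50 rivers
midpoints = [(lo + hi) / 2 for lo, hi in intervals]
# Grouped mean: sum of (midpoint * frequency) over N
mean = sum(m * f for m, f in zip(midpoints, freqs)) / n
# Proportion of rivers at or above 70 ppm
prop_70_plus = sum(f for (lo, _), f in zip(intervals, freqs) if lo >= 70) / n

print(n, mean, prop_70_plus)  # 50 57.1 0.22
```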

Refer to the following information for Questions 5 and 6.

A coin is tossed 4 times. Let A be the event that the first toss is heads. Let B be the event that the third toss is heads.

5. What is the probability that the third toss is heads, given that the first toss is heads?

If it is already given that the first toss is heads, there is a 0.5 probability that the third toss will be heads -- the same as for any standard coin toss. Though the overall probability of both A and B occurring is 0.25 (0.5*0.5), knowing A has already occurred gives B its natural and independent probability.

6. Are A and B independent? Why or why not? Each coin toss is independent as it is not influenced by previous tosses -- a previous heads does not actually change the coin…… [read more]
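The independence claim can be verified by enumerating all 16 equally likely outcomes of four tosses:

```python
from itertools import product

# All 16 equally likely outcomes of four coin tosses.
outcomes = list(product("HT", repeat=4))

A = [o for o in outcomes if o[0] == "H"]             # first toss heads
B = [o for o in outcomes if o[2] == "H"]             # third toss heads
both = [o for o in outcomes if o[0] == o[2] == "H"]  # A and B together

p_A = len(A) / 16
p_B_given_A = len(both) / len(A)
print(p_B_given_A)  # 0.5 -- unchanged by knowing A occurred
print(len(both) / 16 == p_A * (len(B) / 16))  # True: P(A and B) = P(A)P(B)
```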

Inferential Because it Makes Claims A-Level Coursework

… inferential because it makes claims about the population of adult Americans based on a sample of 9000 persons. The use of a sample proportion to estimate the population proportion makes this study inferential (Gravetter & Wallnau, 2008).

The research question in the study was: does the lack of health care increase the risk of death?

The data were obtained using a survey of 9000 persons tracked by the U.S. Centers for Disease Control and Prevention. While it is not explicitly stated in the article that the data was collected using a survey, the tracking of individuals could not be done using an experimental design. Additionally, the purpose of the study suggested that it would take a correlational approach to explicating the problem.

The exclusion of persons aged 65 and over is an attempt to eliminate bias, as the inclusion of these persons would create a systematized form of error within the study. These older Americans receive health care through Medicare.

5. The conclusions drawn from the article are warranted because they are logical. Firstly, persons who do not access medical attention may die from ailments that require medical attention. Secondly, the design of the study and the sample used are representative of the country, and it would be legitimate to use such a sample. Finally, the design of the study followed a similar study done in 1993; therefore, there is methodological support for the approach employed.

6. A large trial is necessary to ensure that the sample is representative of the population. This representativeness means that the sample is similar to the population in key characteristics and there is little difference between the sample proportions and the population proportions. The error in the sample is therefore small.

7. A control group is needed to ensure that non-spuriousness is addressed adequately in the study. Using a control group means that the researcher can be confident that the independent variable has the stated effect on the dependent variable (Lenth, 2001).

8. The double-blind feature guards against the propensity of human error to seep into the study. When both the participants and the researcher are unaware of which group is the treatment group, other variables that could have an effect on the study are controlled for.

9. The use of volunteers would have biased the results as that would have introduced systematic error into the study. The generalizability and validity of the study would be called into question (Creswell 1994). It is only through randomization that random error can be statistically determined.

Chapter 2

A random sample is similar to a convenience sample and a systematic sample only in that they all select members of a population for investigation. All methods of sampling will contain error. It is different because with a random sample there is…… [read more]

Z Test in Psychology Term Paper

… Entering the provided values gives: (75 − 70)/√[(12/√36) + (12/√36)] = 5/2 = 2.5 = z.

Step 4: Probability Calculation

What is being tested is whether the students' attitudes towards the mentally ill change as a result of viewing the film, thus it does not matter if the students' attitudes are better or worse, only if they changed significantly from students who did not view the film. Since the attitudes could be worse or better, this is a two-tailed test. If the Z score falls within 95% of control scores, then we have to retain the null hypothesis and reject the alternative hypothesis.

If the scores fall outside of 95% of control scores, within the 2.5% of the extreme tail of the normal distribution at either end (two-tailed), then we have to reject the null hypothesis and retain the alternative hypothesis. Based on the Z score table, a Z score of 1.96 or greater would be needed to obtain a probability value below the alpha of 0.05, two-tailed.

Step 5: Conclusions

The Z score obtained by comparing the two means was 2.5. For these two means to be significantly different, using an alpha of 0.05, the Z score would have had to be 1.96 or greater. Therefore, since 2.5 >…… [read more]
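The decision rule above can be checked numerically; this sketch converts a z statistic to a two-tailed p-value via the complementary error function:

```python
import math

def two_tailed_p(z):
    """Two-tailed p-value for a standard-normal z statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

p = two_tailed_p(2.5)
print(round(p, 4))       # 0.0124, below the 0.05 alpha
print(abs(2.5) > 1.96)   # True: reject the null hypothesis
```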

Five Process Standards Term Paper

… Standards

Five process standards

Describe the mathematical process standards

Problem solving

Engaging in a task without knowing the solution method in advance is what is referred to as problem solving. Drawing on their knowledge, students are better equipped to find a solution for the problem, and while doing this they will develop a new understanding of mathematics. Students are also able to solve other problems they encounter, both in mathematics and in other life situations, using their problem-solving skills; for example, "I have pennies, dimes, and nickels in my pocket. If I take three coins out of my pocket, how much money could I have taken?" (Mathematics, 2000)

Problem solving involves the application and adaptation of various strategies to assist the student in solving problems.

Reasoning and proof

To gain a better understanding of a wide range of phenomena, a student needs strong mathematical reasoning and proof. Thinking and reasoning analytically allows a person to identify structures, patterns, and regularities in symbolic objects and real-world situations. To better understand mathematics, a student needs to be able to reason. A good example is "Write down your age. Add 5. Multiply the number you just got by 2. Add 10 to this number. Multiply this number by 5. Tell me the result. I can tell you your age." (Mathematics, 2000)

Students are able to better evaluate and develop their own mathematical arguments by employing reasoning and proof.


Communication

For the teaching of mathematics, communication is an integral part. It provides an avenue for students and lecturers to share ideas and make clarifications where necessary. Challenging students to communicate their mathematical results and reasoning helps them learn to justify themselves in front of others, which leads to better mathematical understanding. Working on mathematical problems with others and having discussions allows students to gain more perspectives when solving mathematical problems, e.g., "There are some rabbits and some hutches. If one rabbit is put in each hutch, one rabbit will be left without a place. If two rabbits are put in each hutch, one hutch will remain empty. How many rabbits and how many hutches are there?" (Mathematics, 2000)
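The rabbits-and-hutches puzzle quoted above reduces to two conditions (rabbits = hutches + 1, and rabbits = 2 × (hutches − 1)); a brute-force search confirms the unique small solution:

```python
# Search small values for the pair satisfying both conditions.
solution = None
for hutches in range(1, 100):
    rabbits = hutches + 1                # one per hutch leaves one rabbit out
    if rabbits == 2 * (hutches - 1):     # two per hutch leaves one hutch empty
        solution = (rabbits, hutches)
# -> (4, 3): four rabbits and three hutches
```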


Connections

A student's understanding is deepened when they are able to connect mathematical ideas. By continuously teaching students new mathematics that is connected to what they have learnt previously, teachers enable students to make connections. Learning mathematics by working on problems that arise outside mathematics should also be incorporated into the curriculum. These connections give students an opportunity to relate what they learn to other subjects or disciplines. Mathematics is connected to many other subjects, and it is very important that students get to experience mathematics in context.


Representation

Proper and clear representation of mathematical ideas helps people to better understand and use those ideas. For example, it is far more difficult to do multiplication using Roman numerals than it is to use the Arabic base-ten system (Mathematics,… [read more]

Operations Essay

… d.).

As the first step, solving an equation requires combining like terms across the two expressions within the equation. In this case, like terms are those containing the same variable or group of variables raised to the same exponent, regardless of their numerical coefficients. The second step is to isolate the terms that contain the variable, which means getting the terms containing that variable on one side of the equation while the other variables and constants are moved to the opposite side.

This is followed by isolating the variable to solve for, which means reducing its numerical coefficient to one; once the terms containing the variable have been isolated and the coefficient reduced to one, the variable itself is isolated. The fourth step for solving an equation is substituting the answer into the original equation in order to ensure that the answer is correct. In this case, substitution is the process of swapping variables with expressions or numbers as part of checking the answer. When solving an equation, or explaining how to solve one, the most important factor to consider is the variables in the equation. This is primarily because the variables in the equation play an important role in determining the accuracy of the process, and because they help determine whether the right or the wrong answer will be obtained.
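The four steps can be traced on a concrete equation; the excerpt names no example, so 3x + 4 = x + 10 here is illustrative:

```python
# Original equation: 3x + 4 = x + 10.

# Steps 1-2: combine like terms and isolate the variable terms on one
# side, constants on the other: 3x - x = 10 - 4, i.e. 2x = 6.
coefficient = 3 - 1
constant = 10 - 4

# Step 3: isolate the variable by dividing by its numerical coefficient.
x = constant / coefficient       # x = 3.0

# Step 4: substitute back into the original equation to check.
answer_checks = (3 * x + 4 == x + 10)
```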

Four Steps for Solving a Problem:

In most cases, mathematical problems usually require established procedures as well as knowing the procedures and when to apply them. Moreover, the process of learning to solve a mathematical problem is generally knowing what to search for. In order to identify the necessary procedures for solving an equation, an individual needs to be familiar with the problem situation, gather the appropriate information, and identify and use the strategy appropriately. While there are various steps for solving a problem in mathematics, effective problem solving requires more practice (Russell, n.d.).

The first step for solving a problem is looking at the clues through reading the problem carefully and underlining the clue words or phrases. When looking for clues, it may be important to examine if the person has encountered a similar problem in the past and what was done in that situation. The second step is defining the game plan, which involves developing strategies for solving the problem. During this process, the various strategies developed can be tried out in order to identify the effective one.

The third step in the process is to solve the problem using the strategy identified in the second step. The appropriate strategy is normally identified by trying out various strategies. The fourth step is reflecting on the solution to examine whether it is plausible, whether it solved the problem appropriately, and whether it answered the question in the language of the problem.

When solving a problem or explaining how to solve a problem,… [read more]

Guess and Check Essay

… A popular problem solving strategy that an increasing number of students encounter before middle school is model drawing, sometimes taught as "Singapore Math." The Singapore method, named for the Asian nation in which it was developed, teaches students how to create visuals in a systematic way to assist in solving word problems. When students learn this method and have sufficient opportunities to practice, it can go a long way toward preparing them for the guess and check strategy. The Singapore method asks students to really examine the relationship between values in a problem and carefully consider the question being asked with respect to the solution.

There is a greater language base in today's mathematics programs. Prospective teachers should be reminded to discuss problem solving with their students, emphasizing the process and eschewing exclusive focus on "the right answer." Obviously, solving problems correctly is the goal. Students cannot get credit for wrong answers on standardized tests. More importantly, if students do not understand why they got a wrong answer, they have little hope of solving similar problems successfully in the future. Guess and check enables students to thoughtfully work through problems and gain understanding of mathematical relationships. It is a strategy that can help foster success. Success tends to beget further success; students who are able to solve problems build confidence in their ability to do so. They feel good about themselves and good about their mathematics classes. They learn that math does not have to be intimidating. There are thoughtful, logical ways to approach problem solving. It is a good lesson for mathematics as well as for other academic content areas.
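A guess-and-check loop of the kind discussed here can be applied to a classic word problem (the numbers are illustrative, not from the article): a farm has 20 animals, chickens and cows, with 56 legs in all.

```python
def guess_and_check(heads, legs):
    # Guess a number of cows, check the leg count, revise the guess.
    for cows in range(heads + 1):
        chickens = heads - cows
        if 4 * cows + 2 * chickens == legs:
            return cows, chickens
    return None

cows, chickens = guess_and_check(20, 56)   # -> 8 cows, 12 chickens
```

The systematic sweep replaces random guessing: each failed check narrows the search, which is the understanding of mathematical relationships the strategy is meant to build.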


Guerrero, S.M. (2010). The value of guess and check. Mathematics Teaching in the Middle School…… [read more]

Nursing Research Analyzing Qualitative Data Essay

… Statistics and Quantitative Analysis Design

Inferential statistics are based on the laws of probability and allow inferences to be drawn about a population based on a sampling of that population. Three applications for inferential statistics are: the sampling distribution of the mean; estimating parameters; testing hypotheses. The Sampling Distribution of the Mean employs an infinite number of samples from a selected population and theoretically distributes the means of those samples. Estimating Parameters consists of defining and establishing a framework for the target population from statistical samples (Polit & Beck, 2008, pp. 583-584). Finally, hypotheses are tested with objective criteria provided by data to infer whether the hypotheses are sufficiently supported by the evidence (Polit & Beck, 2008, p. 587).

Multivariate Statistics is an area of statistics concerned with the collection, analysis and interpretation of several statistical variables at once. While statistics may be artificially confined for convenience's sake, health care actually involves complex relationships of variables for patients themselves, within a single health care institution, within a group of health care institutions, and within the entire health care system. Multivariate statistics observes and analyzes several of these variables at once using several types of tests for various purposes.

Multivariate Statistics analysis is integrated in quantitative analysis through a number of tests to compare a number of variables in complex relationships. Tests used in multivariate statistics include: multiple regression/correlation tests, used to understand the effects of at least 2 independent variables on one continuous dependent variable (Polit & Beck, 2008, p. 614); analysis of covariance (ANCOVA), which compares the means of at least two groups with a single central question (Polit & Beck, 2008, p. 624); multivariate analysis of covariance (MANCOVA), which involves controlling covariates -- or extraneous variables -- when the analysis involves at least two dependent variables (Polit & Beck, 2008, p. 627); discriminant function analysis, which involves using a known group to predict an unknown group with independent variables (Polit & Beck, 2008, p. 628); canonical correlation, which involves testing one or more relationships between two sets of variables (Polit & Beck, 2008, p. 638); logistic regression, which predicts the probability of an outcome based on an odds ratio (Polit & Beck, 2008, p. 640).

Inferential Statistics assists in… [read more]

Person Hired a Firm to Build Essay

… ¶ … Person hired a firm to build a CB radio tower. The firm charges $100 for labor for the first 10 feet. After that, the cost of the labor for each succeeding 10 feet is $25 more than the preceding 10 feet. That is, the next 10 feet will cost $125; the next 10 feet will cost $150, etc. How much will it cost to build a 90-foot tower?

We see that there is a new price for every ten feet of tower. Each new price is $25 added to the previous price. Since repeated addition is involved, this is an arithmetic sequence. First, we need to identify the following numbers:

n = number of terms; here n = 9

d = the common difference; here d = 25

a1 = the first term; here a1 = 100

an = the last term; here an = a9

We know n = 9 because the tower increases in increments of ten feet, and the final height is 90 feet: 90/10 = 9.

To find the nth term of an arithmetic sequence, Page 271 of Mathematics in Our World gives us the following formula:

an = a1 + (n-1)d

a9 = 100 + (9-1)(25)

a9 = 100 + 8(25)

a9 = 100 + 200

a9 = 300…… [read more]
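The computation above, plus the total the question actually asks for (the excerpt breaks off before summing the series), in a short sketch:

```python
def nth_term(a1, d, n):
    # The formula quoted from the text: an = a1 + (n - 1)d.
    return a1 + (n - 1) * d

a1, d, n = 100, 25, 9        # $100 for the first 10 feet, +$25 per section
a9 = nth_term(a1, d, n)      # cost of the ninth 10-foot section: $300

# Total cost of the 90-foot tower is the sum of the arithmetic series:
# Sn = n(a1 + an) / 2.
total_cost = n * (a1 + a9) // 2   # $1,800
```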

Correlation and Causation Understanding Essay

… Correlation and Causation

Understanding correlation

Within any population the variables that concern a researcher will hold different values. This difference in value for any variable becomes the basis of different types of analysis, which go beyond simply counting categories of the phenomenon. This type of analysis engages the use of variation to make statements about the nature of the relationship between variables. One of the ways to measure the association between two variables is the use of correlation. Correlation is consequently a useful tool that provides a quantitative measure of the presumed relationship between two or more variables.

Correlation therefore is a statistical technique that provides a numerical or quantitative assessment of the degree to which two variables co-vary. The idea of association is tied to the concept of co-variation. Co-variation occurs when two variables change values together. This changing of values is a conceptual association that exists as a consequence of the way in which we try to make sense of the world. Within the mind of the observer it is possible to consider that the presence of x is linked to the presence of y. This linking is a consequence of observing instances of x and seeing instances of y existing within close proximity to x. One may observe that changes in diet may result in the loss or gain of weight. This observation forms the basis of common understandings about the relationship between things. What scientists have attempted to do is measure the strength of that relationship, thus providing a number that can be compared to other numbers to indicate different features of the observed relationship.

The main way to represent a correlation is with the correlation coefficient (r). The correlation coefficient is the product of a series of statistical calculations produced when either Pearson's Product Moment Correlation or the Spearman Rho is computed. The correlation coefficient ranges in value from -1.0 to +1.0. The larger the size of the correlation coefficient (that is, the closer it tends toward 1 or -1), the stronger the relationship between the variables being tested. Moderate correlations are understood to begin at around 0.6 and weak correlations at around 0.4; these values may be positive or negative. If the correlation coefficient is 0, that suggests there is no relationship between the variables being tested.

The positive and negative signs are very important in interpreting the correlation between two variables. While the number tells the magnitude or size of the correlation, the sign before the number indicates the direction of the correlation. The direction of the correlation can be positive or negative; these directions are also known as a direct correlation and an inverse correlation (Cooper & Schindler 2011). With a direct correlation the values of both variables increase together. Consequently, as the number of calories that an individual ingests increases, their weight may also increase. The relationship that has been described is a positive correlation; in an inverse correlation, by contrast, as one variable increases the other decreases. In an indirect correlation… [read more]
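The coefficient described here can be computed directly; a standard-library sketch of Pearson's r, with hypothetical calories-versus-weight data echoing the example in the text:

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative data: as calorie intake rises, so does weight.
calories = [1800, 2000, 2200, 2500, 2800]
weight = [60, 63, 66, 70, 75]
r = pearson_r(calories, weight)    # close to +1: a strong direct correlation
```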

Normal Distribution Curve Essay

… This equality is a constituent of the normal curve and something that makes it helpful to psychology in that different measurements can be based on the normal curve and applied to varying situations.

Z scores, for instance, are raw scores converted to units of standard deviation. Because of the nature of the normal distribution, these can be converted to percentiles or to other scores of measurement if necessary.

The normal distribution is the shape that occurs most often when describing a population, and it is fortunate that it does, because it can be precisely quantified by mathematical equations. IQ scores are an ideal example of a normal distribution, where the greatest frequency occurs at the mean in the middle, with frequencies tapering off on either side.
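The Z-score-to-percentile conversion mentioned above can be sketched with the usual IQ convention (mean 100, standard deviation 15; the excerpt does not state these numbers):

```python
import math

def z_score(raw, mean, sd):
    # A raw score expressed in units of standard deviation.
    return (raw - mean) / sd

def percentile(z):
    # Standard normal CDF (via math.erf), expressed as a percentile.
    return 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

z = z_score(130, 100, 15)   # an IQ of 130 is z = 2.0
pct = percentile(z)         # roughly the 97.7th percentile
```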

Because it occurs often, and because the shape is mathematically guaranteed, parametric studies and statistical tools (i.e. those based upon a normal distribution) are often more reliable than non-parametric ones.

The normal distribution, finally, is also important in statistics because, under certain mild and commonly met conditions, the sum of a large number of random variables is approximately normally distributed; it is also a convenient choice for modeling a large variety of the random variables that are generally encountered.

Moreover, of all distributions, the normal distribution is the only absolutely continuous distribution whose cumulants beyond the first two (the mean and the variance) are all zero.


Casella, G. & Berger, R. (2001). Statistical inference. UK: Duxbury.

Gravetter, F., & Wallnau, L. (2007). Essentials of statistics for the behavioral sciences. USA: Thomson Wadsworth.

Weinbach, RW, & Grinnel, RM. (1991).…… [read more]

How Math Explains the World Term Paper

… ¶ … Math Explains the World

The title of James Stein's book, How Math Explains the World, is, perhaps, a bit deceptive. The reader who is expecting simplified explanations of complex mathematical principles will be disappointed. Although Stein has simplified… [read more]

Patient Perceptions of Maternal HIV Case Study

… For each patient in this study X and Y were known, but the researchers wanted to establish a straight line through the data that minimizes the sum of the squares of the vertical distances between the observed points and the fitted line.
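The line described here is an ordinary least-squares fit; a minimal sketch (the (X, Y) pairs are illustrative, not the study's data):

```python
def least_squares_line(xs, ys):
    # Slope and intercept minimizing the sum of squared vertical
    # distances between the points and the line.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Points lying exactly on y = 2x + 1, so the fit recovers slope 2, intercept 1.
slope, intercept = least_squares_line([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
```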

Study bias. Participating patients self-selected to complete surveys, and not all survey respondents may have understood the terminology used in the survey in the same way.

Summary of Table 4. The recollection of patients regarding their physicians' practices is shown in Table 4, along with the responses of the physicians. Physician responses reflect their practice standards for recommending testing to women exhibiting certain attributes or life situations, as well as two specific questions that the physicians ask their patients. The responses of patients to the various questions are shown categorically for pregnant and non-pregnant women.

Chi-Square Test -- Race and Recall. Race was not found to be a strong predictor, but a test did indicate that a patient's race is associated with her report that she had an HIV test. White, non-Hispanic and Asian women were significantly less likely to report having been tested for HIV than were African-American or Hispanic women. In the notation X2(3) = 17.3, X2 represents the Chi-Square statistic, the "3" stands for the degrees of freedom, and 17.3 is the Chi-Square value. The p value is a measure of how much evidence there is against the null hypothesis; it indicates the probability of getting a result as extreme as the one obtained if the null hypothesis were true. A small p value indicates that the null hypothesis can be rejected, with the understanding that there is still a possibility of making an error. P < 0.01 means that, if the null hypothesis (of no differences between the groups) were true, the probability of obtaining such a result would be less than one percent. Chi-Square is a non-parametric test, and though it can indicate whether two variables are associated, it cannot tell the nature of that association.
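How a chi-square statistic of this kind is computed from a contingency table, with illustrative cell counts (the study's actual counts are not given in the excerpt):

```python
# Hypothetical 2 x 4 table: reported an HIV test (yes/no) by four
# race/ethnicity groups.
observed = [
    [30, 20, 45, 40],   # reported having been tested
    [40, 30, 25, 20],   # did not
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi_sq = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_sq += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)   # (2-1)(4-1) = 3
CRITICAL_P01_DF3 = 11.345    # chi-square critical value for p = 0.01, df = 3
significant = chi_sq > CRITICAL_P01_DF3
```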

Limiting factor. The researchers noted that physicians were used to distribute, collect, and return the patient surveys to the principal investigators and, as such, there was no way to introduce random selection of the patient sample. Also, the patients in the study are associated with only 68 physicians, so generalization may be…… [read more]

Structured Analysis of an Experimental Research Paper

… When using sample variances to estimate the overall variance of a population, it is very important to avoid biasing the estimate by dividing by (n - 1), rather than by the actual sample size n, in the variance formula. Without this correction, the computed sample variance would be a biased underestimate of the population variance.
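The bias this paragraph describes is easy to demonstrate by simulation: repeatedly sample from a population with known variance 1 and average the two estimators (a sketch; the seed and sizes are arbitrary):

```python
import random

random.seed(42)
N, TRIALS = 5, 20000    # small samples exaggerate the bias

biased_sum = unbiased_sum = 0.0
for _ in range(TRIALS):
    sample = [random.gauss(0, 1) for _ in range(N)]   # true variance = 1
    mean = sum(sample) / N
    ss = sum((x - mean) ** 2 for x in sample)
    biased_sum += ss / N           # dividing by n underestimates
    unbiased_sum += ss / (N - 1)   # dividing by n - 1 corrects the bias

biased_avg = biased_sum / TRIALS       # tends toward (n-1)/n = 0.8
unbiased_avg = unbiased_sum / TRIALS   # tends toward the true value, 1.0
```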

In the 2004 study by Buller et al., the dispersion measures of variance and standard deviation were not of primary interest to the researchers in themselves; however, the confidence intervals for their calculated results were paramount. Computing valid confidence intervals (CI) relies, firstly, upon establishing that the data are normally distributed and, secondly, upon having the mean and standard deviation available to compute the CI. Therefore, the internal computations of mean and standard deviation from the large sample size were key to the results of this study. The range parameter was of incidental interest to the researchers, and was implied by the bounds of the categorical ranges they defined for each of their various tests. As noted by the researchers, "the large sample size allowed outcome assessment in patients with a broad range of body weights and renal function." 1

A standard normal distribution is a formal construct, defined as a normal distribution having a mean of zero (0) and a standard deviation of one (1). The area under the standard normal curve represents the proportion or number of observations in the sample being analyzed, with each observation's position measured as its positive or negative distance from the mean (the center line of the graph) in units of standard deviation. If a sample is observed to have a normal distribution, it will have characteristics similar to the standard normal distribution, and it therefore becomes possible to use familiar tools to compute the probabilities of selected outcomes, or the proportions of value ranges.

In the 2004 study by Buller et al., the majority of the data gathered was categorical in nature and was used to classify trial results as either significant or not significant for each of a large number of specific symptomatic tests. The essence of the comparisons relied upon the techniques of hypothesis testing and confidence intervals to validate whether the effect of each drug was significant for each of the symptomatic tests, and then computing the relative significance to compare the performance of the drugs. The experimental result data were entered into a statistical analysis tool (SAS), which established the necessary preliminary criterion that the data conformed to a normal distribution, enabling the researchers to employ the standard statistical tools.

The 2004 study by Buller et al. demonstrates the characteristics of a well-designed and appropriate statistical analysis. The researchers made a conscious effort to use very large sample sizes for each of the medication trials (n 1100), and they established a standard method of hypothesis testing with 95% confidence intervals… [read more]

Pre-Calc Trigonometry Journal

… Modeling Real-World Data with Sinusoidal Functions

The sinusoid, sometimes referred to as the sine wave, is a mathematical function describing a smooth, repetitive oscillation. It occurs in pure mathematics and also… [read more]
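A sinusoidal model of the kind this journal entry goes on to fit has the general form y = A sin(B(x − C)) + D; a sketch with hypothetical daylight-hours parameters (12-hour average, 3-hour swing, 365-day period — illustrative values, not from the journal):

```python
import math

def sinusoid(x, amplitude, period, phase_shift, midline):
    # y = A * sin(B * (x - C)) + D, where B = 2*pi / period.
    b = 2 * math.pi / period
    return amplitude * math.sin(b * (x - phase_shift)) + midline

def daylight_hours(day):
    # Hypothetical model: average 12 h, swing of +/-3 h, crossing the
    # average (rising) around day 80.
    return sinusoid(day, amplitude=3, period=365, phase_shift=80, midline=12)

at_crossing = daylight_hours(80)          # 12.0 hours at the crossing
at_peak = daylight_hours(80 + 365 / 4)    # 15.0 hours a quarter period later
```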

Bayesian Methods for Data Analysis in Transcription Networks Term Paper

… ¶ … Bayesian method refers to methods on probability and statistics particularly those related to the degree of belief interpretation of probability as opposed to frequency interpretation. In Bayesian statistics, a probability is assigned to a statement, whereas under frequency… [read more]
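The degree-of-belief interpretation can be made concrete with Bayes' rule; a sketch with illustrative numbers (a 1% prior and a test with 95% sensitivity and 90% specificity — none of these figures are from the paper):

```python
def posterior(prior, sensitivity, specificity):
    # P(H | positive) = P(positive | H) * P(H) / P(positive).
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# A 1% prior degree of belief, updated after one positive test result.
updated_belief = posterior(prior=0.01, sensitivity=0.95, specificity=0.90)
# The degree of belief rises from 1% to roughly 8.8%.
```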

Representation in Algebra: A Problem Solving Approach Essay

… Representation in Algebra: A Problem Solving Approach

The need for a solid background in mathematics for high school and college students in the 21st century is well documented (Katz & Barton 2007). A number of emerging career fields in the… [read more]

Testing a Critical Element of Research Essay

… ¶ … Testing

A critical element of research is determining whether a set of observations is the product of chance or the result of the action of the independent variable on the dependent variable. This approach to knowledge creation and verification engages deductive reasoning in the decision-making process. Thus, the essential feature of this process is making the correct decision with reference to what the data are pointing to. To make this process more reliable and valid, researchers engage in a type of thinking and analysis involving statistical testing called hypothesis testing.

A hypothesis is a conjectured or predicted statement of the relationship between two or more variables. The researcher uses the hypothesis to represent the relationship he anticipates will explain some aspect of the variance in the dependent variable. For the purposes of hypothesis testing, the researcher will usually have a null hypothesis and an alternate hypothesis. The null hypothesis is the hypothesis that is actually tested; it is either rejected or not rejected by the researcher based on the test results (Ryan 2004). To accomplish this successfully requires that some decisions be made as to when the relationship exists or does not.

The result of the hypothesis test is considered significant if it is highly unlikely that it could be the product of chance. This is determined using a specific threshold for the rejection of the null hypothesis based on a specific significance level. The significance level for the rejection of the null hypothesis is determined before the test is undertaken.

There are two errors that can be made in hypothesis testing, and they center on the rejection of the null hypothesis. The researcher can reject the null hypothesis when the null hypothesis is in fact true. This is called a Type I error, and the probability of making it is called the alpha level. It therefore stands to reason that the lower the alpha level is set, the smaller the chance of making a Type I error; this also means that a more extreme test value is required for the result to be considered significant. The other type of error that can be made in hypothesis testing is the Type II error, the reverse of the Type I error. As the researcher seeks to ensure that the result is not the product of chance, the possibility increases of failing to reject the null when the null is actually false. The Type II error thus leads the researcher to conclude that there is no effect when there actually was an effect.

Aron, Coups & Aron (2011) identify a five-step procedure for successful hypothesis testing. The first step involves restating the research question as a "research hypothesis and a null hypothesis about the populations"; in general, the null hypothesis is the opposite of the research hypothesis and states that there is no change, no difference, or no effect. Secondly, the characteristics of the comparison distribution… [read more]
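The Type I error rate discussed above can be checked by simulation: draw many samples from a population where the null hypothesis is true, test each at alpha = 0.05, and count the false rejections (a sketch; the seed and sample sizes are arbitrary):

```python
import math
import random

random.seed(1)
Z_CRITICAL = 1.96        # two-tailed critical value for alpha = 0.05
N, TRIALS = 30, 10000

false_rejections = 0
for _ in range(TRIALS):
    sample = [random.gauss(0, 1) for _ in range(N)]   # null is true: mean 0
    z = (sum(sample) / N) / (1 / math.sqrt(N))        # known sigma = 1
    if abs(z) >= Z_CRITICAL:
        false_rejections += 1                         # a Type I error

type_i_rate = false_rejections / TRIALS   # hovers near alpha = 0.05
```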

Codes Were Labeled Thus for Data Analysis A-Level Coursework

… ¶ … codes were labeled thus for data analysis: the category of participants who found my work to be highly interesting was coded 1; those who found it somewhat interesting were coded 2; those who found it tedious / boring, I coded 3; whilst those who found it highly disinteresting, I coded 4. I should also have coded those whose data was missing (e.g. illegible, absent, or who had not responded). 1,673 out of 2,715 participants had failed to complete their surveys (their data was partially or entirely missing). This amounted to 61.6% of the survey. The cases that I could ultimately rely on numbered 1,042.

Absolute frequency denotes the raw count of those who responded in each category. For instance, 650 participants responded that they were highly interested. This tells us that my presentation (assuming it was that) must have been highly interesting, or I am popular, or perhaps the participants were in some way influenced to vote on my behalf, for well over half of the valid cases found me highly interesting; altogether the overwhelming majority were at least positively inclined towards my presentation, whilst only 89 participants ranged from being somewhat bored to being utterly bored. I might conclude that my presentation was successful. The relative frequency converts the absolute frequency into a proportion of all cases: for instance, I divide the 650 respondents of Code 1 by the total 2,715 (650 / 2,715) to obtain the relative frequency. The adjusted frequency is computed the same way but excludes the missing cases (650 / 1,042, or about 62.4%), which allows me to compare responses across the valid portion of the sample. So, for instance, I see that the majority of valid respondents (62.4%) were highly interested in my work, whilst the smallest group of all (2.7%) loathed it. The cumulative frequency is the running total of adjusted frequencies as you move up the scale: 62.4% of valid respondents (i.e., 650 individuals) were highly interested, and by the time you progress to the most disinterested category the cumulative frequency reaches 100%.
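The four frequency columns described above can be recomputed; the code-1 count (650) is given directly, and the remaining counts are reconstructed from the figures quoted in the excerpt (89 bored in total, 2.7% of 1,042 ≈ 28 in the most negative category, leaving 303 for code 2):

```python
counts = {1: 650, 2: 303, 3: 61, 4: 28}   # absolute frequencies per code
total_surveyed = 2715
valid_cases = sum(counts.values())         # 1,042 usable cases

# Relative frequency: share of ALL surveyed; adjusted: share of valid cases.
relative = {k: v / total_surveyed for k, v in counts.items()}
adjusted = {k: v / valid_cases for k, v in counts.items()}

# Cumulative frequency: running total of adjusted shares up the scale.
cumulative, running = {}, 0.0
for code in sorted(counts):
    running += adjusted[code]
    cumulative[code] = running             # reaches 1.0 (100%) at code 4
```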


(b) the genders are equally balanced in their preference for coke (n=7)

2. Mean: all the scores were added and then divided by the number of scores (i.e. 30 in this case).

Standard error: The sample is never a perfectly accurate reflection of the population. There will always be some error between sample and population and the S.E. measures the average difference that should be expected between one and the other. In this case, the S.E. is low.

Median: List the score in order lowest to highest; the median is the middle score in the list (the 50th percentile). Here 16 is the median.

Mode: the most frequently occurring score; here, 16.

Sample variance: a measure of how spread out the sample's scores are around the sample mean, computed with the n − 1 correction so that it estimates the population variance without bias. In this case the sample variance is 6.21.

Standard deviation: The square… [read more]
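All of the measures defined above are available in Python's standard `statistics` module; a sketch on a hypothetical list of 30 scores chosen so that, as in the excerpt, the median and mode are both 16 (the actual data set is not shown):

```python
import statistics

scores = [11, 12, 12, 13, 13, 14, 14, 14, 15, 15,
          15, 16, 16, 16, 16, 16, 17, 17, 17, 18,
          18, 18, 19, 19, 20, 20, 21, 21, 22, 23]

mean = statistics.mean(scores)
median = statistics.median(scores)                 # middle of the sorted list
mode = statistics.mode(scores)                     # most frequent score
sample_variance = statistics.variance(scores)      # divides by n - 1
standard_error = statistics.stdev(scores) / len(scores) ** 0.5
```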

Direction Attach Term Paper

… ¶ … prioritized items that I would take with me were I going away on a long trip

Lengthy letters, with photographs attached, from people who are emotionally closest to me, with these letters also containing personal memories, assurances of their love, and advice and encouragement for the future.

My laptop -- fully upgraded to the most secure and most functional program then in existence. It should have the basic capabilities, and ability to access and listen to a wide range of music and movies. I would ascertain, too, that I have long-term subscription to the major academic databases that interest me.

A thorough compendium of global philosophy, from past to present, as well as theories from the social sciences, in particular from sociology.

One of Winget's books, likely "People are idiots and I can prove it" (2009)

An introductory textbook to a wide-ranging spectrum of mathematical and logical disciplines. The one I have in mind is called, "A survey of mathematics with applications" (Angel & Porter, 1997)

Part B. Description of Item

1. The letters: I would request those sentimentally closest to me that they describe their feelings towards me in as best a manner as they can, that they describe events that have happened between us that have positively impacted them, and I would conclude with a request for their impressions of my strongest and weakest points. I would ask them to attach their photographs, and an addendum of specific encouragement and/or advice for the future.

2. The laptop -- I might switch to MEPIS, a program I have read that is more secure and reliable than Windows. I would ascertain that it is virus-free with a rapid Internet connection. I would also sign up for long-term subscription with pertinent online Academic databases; ensure that I have access to reliable music and DVD capabilities and, take along a starters' base of several of my best-loved music CD and DVDs (possibly although not necessarily the latter). I would ensure that I have all computer paraphernalia along with a large supply of printing paper, and several empty notebooks as well as a large supply of pens.

3. "People are idiots and I can prove it" (2009): In a down-to-earth, acerbic tone, Winget shows you exactly where and what you are -- he cuts through all the delusions -- whilst in an unusually commonsensical way he shows you how to see the mess in your life for what it is, and how to straighten it out. This is no self-help book; this is a self-'do' book.

4. The compendium: universal in approach, authoritative and comprehensive, an encyclopedia written in a scholarly manner covering every single…… [read more]

Statistics in Social Work Research Paper

… Back-end testing of additional questions, utilizing data from the sample and test type, performs an 'audit' of sorts on the research. While not typically necessary, as outlandish findings are invariably obvious to professionals who have been working in practice, in… [read more]

NCTM Process Standards Essay

… ¶ … NCTM Process Standards

Problem solving
In my class, problem-solving activities were integrated into every learning unit. Some of the methods deployed included learning how to use fractions in a hands-on fashion. As well as doing standard fraction-related problems on paper, students were asked to make visual representations of fractions and use them to solve word problems.

Learning how to make unit conversions was one of the most useful skills learned by the students. Students were given problems similar to those they might cope with in daily life, such as converting standard measurements to the metric system and vice versa. Students also were given the task of painting an imaginary room, and were asked to scale 'up' the amount of paint it would take to cover the surface area, based upon the previous amount used for the smaller, similarly-shaped room.

Students were given problems involving distance, rate, and time. All of these were intended to show the applications of problem-solving activities in math in 'real life' and teach students that understanding math required more than merely manipulating equations.

Reasoning & proof

For all problems worked on in class or at home, students were required to show how they arrived at their answers. It was not enough to simply get the right answer -- the process had to be demonstrated correctly. Stressing the process of solving a problem over getting the right answer runs contrary to how mathematics is usually taught. A process-based teaching strategy underlines the fact that there are different but equally valid ways of arriving at the same answer to a problem, although some methods are more efficient.

Depending on the learning orientation of the student (verbal, visual, spatial, or kinesthetic) some activities proved more effective for certain members of the class than others, so a variety of strategies were used to teach a single concept. For example, one kinesthetic activity entitled "Walk down the line" required the students…… [read more]

Online Field Trip Comprised of Visits Essay

… Online field trip comprised of visits to five online locations. Each website was related to the teaching and understanding of mathematics. The contents of the sites included information for teachers, parents and students. Some sites were concerned with the testing of particular skills, while others focused on providing relevant information to interested persons. The following paragraphs will provide a brief summary of the specific websites.

The first site visited was titled "Illumination: Resources for Teaching Math." This website was well organized and vivid. There were links for activities, lessons, standards and other online math resources. The home page of the site provided easy access to some of the more relevant resources on the site. The material on this site is designed to make the teaching of math fun and enjoyable in the classroom.

Following the "Illumination" site, the next site visited was the "National Council of Teachers of Mathematics" (NCTM) website. This website was abuzz with a multitude of links and a wealth of information specifically for teachers. The resources and articles on this site focused on teacher development. From conferences to job opportunities, the professional development of the teacher was central to this site.

The website "A Math Dictionary for Kids" by Jenny Eather employed bold, bright attention grabbing colors. The website provided definitions to mathematical concepts at a level that children could easily grasp. Selecting a…… [read more]

Curriculum Design Essay

… Curriculum Design

Mathematics -- Trigonometry


Spiritual Principle: "And he [Hiram] made a molten sea, ten cubits from the one rim to the other it was round all about, and...a line of thirty cubits did compass it round about....And it was an hand breadth thick...." -- First Kings, chapter 7, verses 23 and 26

This refers to the importance of studying exact measurements (see note) and utilizing mathematical principles to perform accurate calculations.

Students will apply the basic principles of trigonometry to investigate and explain the characteristics of rational functions. Students will apply these basic principles to understand why trigonometric measurement is necessary based on the limits of geometrical measurement. Students will understand basic principles of ratio, sine, cosine, and tangent. Students will be able to explain how trigonometry might be used in their daily lives. (Why is Trigonometry Important?, 2001).

Suggested Activities and Experiences-

1. Introduction to Trigonometric Principles -- To find relevancy, students need to see why trigonometry was invented and what questions it can answer. In addition, there are different problem solving skills necessary when dealing with trigonometric functions. Break students into groups so that there are at least 4 separate groups. Here is the problem:

You are in a group which is to abseil down a rock face tomorrow. Your task is to estimate the height of the face. You have no measuring instruments. You need to determine the height to know how much rope to take. You cannot take excess rope as you are at the start of a four day exercise and you must not have extra weight with you. Tomorrow morning you will walk the track which will take you to the top of the rock face.
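One common solution to this estimation problem uses the tangent of the angle of elevation, measured (or paced off) from a known distance to the base. The essay does not prescribe a method, so this Python sketch, with hypothetical distances and angles, is only one plausible approach:

```python
import math

def estimate_height(distance_m, elevation_deg, eye_height_m=1.5):
    """Estimate a rock face's height from the angle of elevation to its
    top, sighted from a known horizontal distance to its base:
    height = distance * tan(angle) + observer's eye height."""
    return distance_m * math.tan(math.radians(elevation_deg)) + eye_height_m

# Hypothetical example: standing 40 m from the base, the top is
# sighted at an elevation of 50 degrees.
height = estimate_height(40, 50)
```

At 45 degrees the tangent is 1, so the height simply equals the distance paced off -- a useful sanity check students can discover for themselves.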

Questions on board to read: What…… [read more]

Objective Map Essay

… Mathematics

Grade 9

H.S School Curriculum

Lynchburg, VA

Spiritual Principle

To everything there is a season, and a time to every purpose under heaven (KJV Ecclesiastes 3:1-8)

Standard 1: Students will be able to explain the principles of, graph, and solve step and piecewise functions.

They will be able to convert absolute value functions into piecewise functions.

Standard 2: Students will be able to graph and solve exponential functions and use them to model and predict real life scenarios.

Standard 3: Students will be able to solve quadratic equations and inequalities in one variable. Students will be able to determine and graph the inverses of linear, quadratic and power functions, including restricted domains

Suggested Activities and Experiences


Students will list the types of real world experiences that must be measured in terms of functions or rates of change over time, like changes in distance, temperature, and amounts of interest.

Students will find real world examples of piecewise functions in the newspaper and online, such as the rates of change of distance and speed, cell phone plans, and the value of buying in bulk and then graph these scenarios while in class (McClain & Rieves 2010, p.12).

3. The class will be divided into two halves and given transparencies and markers: one half will graph a linear function, the other half a quadratic function. After graphing both on transparencies, students will lay the graphs together and see whether the final, combined graph demonstrates a new type of function (McClain & Rieves 2010, p.11).

Standard 2:

1. Students will use the principles of compound interest to solve real-world investment goals. For example, a student might ask how he or she can save a specific amount of money within a defined time period to meet a life goal. If he or she has the opportunity to invest in a financial instrument yielding a particular amount of…… [read more]
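The compound interest activity above rests on the formula A = P(1 + r/n)^(nt). A minimal sketch, with hypothetical figures, of solving it for the time needed to reach a savings goal:

```python
import math

def years_to_goal(principal, annual_rate, goal, compounds_per_year=12):
    """Solve A = P * (1 + r/n)**(n*t) for t: the number of years for
    `principal` to grow to `goal` at `annual_rate`, compounded
    `compounds_per_year` times per year."""
    r, n = annual_rate, compounds_per_year
    return math.log(goal / principal) / (n * math.log(1 + r / n))

# Hypothetical life goal: turn $5,000 into $8,000 at 6% compounded monthly.
t = years_to_goal(5000, 0.06, 8000)
```

Students can check the answer by plugging t back into the original formula and confirming the balance reaches the goal.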

Students at the End of This Grade Term Paper

… Students at the end of this grade level must be able to investigate and solve step and piecewise functions. This means that they must be able to write absolute value functions as piecewise functions. Features of piecewise functions that students must be able to explain include domain, range, vertex, axis of symmetry, intercepts, extrema, and points of discontinuity. Students must show the ability to solve absolute value equations and inequalities analytically and graphically.

Standard 2: Students must be able to explore exponential functions. This includes the ability to extend properties of exponents to include all integer exponents. They must also be able to solve exponential equations and inequalities of relative simplicity both analytically and graphically. Students must demonstrate an understanding and ability to use basic exponential functions to model reality.

Standard 3: Students must be able to solve quadratic equations and inequalities in one variable. This involves finding real and complex solutions to mathematical equations by means of processes such as factoring, square roots, and the application of the quadratic formula. They must be able to analyze the nature of roots by means of technology and the discriminant. They must be able to describe their solutions by means of linear inequalities.
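The quadratic-formula work this standard describes, including real and complex roots distinguished by the discriminant, can be sketched as:

```python
import cmath

def solve_quadratic(a, b, c):
    """Return both roots of ax^2 + bx + c = 0 via the quadratic
    formula; the sign of the discriminant b^2 - 4ac decides whether
    the roots are real or a complex-conjugate pair."""
    disc = b * b - 4 * a * c
    root = cmath.sqrt(disc)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

# x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3): real roots 3 and 2.
real_roots = solve_quadratic(1, -5, 6)
# x^2 + 1 = 0 has a negative discriminant (-4): roots i and -i.
complex_roots = solve_quadratic(1, 0, 1)
```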

Standard 4: Students must be able to explore inverses of functions. This includes a discussion of functions and their inverses, by means of concepts such as one-to-oneness, domain, and range. Students must demonstrate an ability to determine the inverses of linear, quadratic and power functions, including restricted domains. They must also be familiar with the use of graphs to determine functions and their inverses. Composition must be used to verify the relationship between functions and their inverses.

Standards for Grade Level 10

Standard 1: Students must be able to analyze a higher degree of polynomial function graphs. This means that they must be able to graph simple polynomial functions and understand the effects of elements such as degree, lead coefficient, and multiplicity of real zeros on the graph. Students must also be able to determine the symmetry of polynomial functions in terms of their nature as even, odd, or neither. They must also demonstrate an ability to explain polynomial functions by referring to elements such as domain and range, intercepts, zeros, relative and absolute extrema, and end behavior.

Standard 2: Students at the end of this grade level must show an ability to explore and understand logarithmic functions as inverses of exponential functions. This includes the definition and understanding of nth root functions, as well as extending the properties of exponents to include rational exponents. Students must be able to extend the laws of exponents in order to understand and use the properties of logarithms.

Standard 3: Students must be able to solve various equations and inequalities by finding real and complex roots of higher degree polynomial equations. They must demonstrate an ability…… [read more]

Statistics Teaching Measures Essay

… Teaching Measures of Central Tendency

This paper provides a descriptive narration of the measures of central tendency (the mean, the median, the mode, the weighted mean, and the distribution shapes), with solved examples to illustrate these measures. As the paper describes, measures of central tendency form a category of descriptive analysis that uses a single value to describe the central representation of a dataset, making them a useful tool in analysis. Because of the disparities among different data sets, the mean or average by itself may not provide the needed information about the distribution of the data. The different measures of central tendency together give adequate information concerning the distribution of any data set, and it is therefore important to understand them.

Teaching Measures of Central Tendency

Measures of central tendency form one of the two categories of descriptive statistics; they use a single value as a central representation of a data set, which is important in statistical analysis because it represents a large set of data using only one value. Several methods apply to represent this central part, among them the mean, median, mode, weighted mean, and distribution shapes. These methods are crucial in statistical analysis as they give information about any data set of interest.

First, we examine the mean as a measure of central tendency. The most commonly utilized measure, also known as the average, it is calculated by summing all values in a selected population and then dividing the total by the number of observations. Depending on whether the sample mean or the population mean is desired, the formula can differ slightly; either way, the result is a central representation of the data set. For instance, if a data set constitutes the following 5 observations, 2, 7, 4, 9, and 3, then the mean is obtained by summing all observations (2 + 7 + 4 + 9 + 3) to obtain a cumulative sum of 25, then dividing this result by the number of observations (Mean = 25/5 = 5). Therefore, the mean of the five observations is equal to five (Donnelly, 2004, p. 46).

The next measure is the weighted mean. Unlike the normal mean or average, which allocates equal weight to all values of the observation, the weighted mean gives the flexibility to allocate more weight to certain values of the observation than to others. For example, consider the scores of a student in three assessments, where the exam carries a 50% weight, the practical contributes 30%, and the homework takes the remaining 20%. If this student scores 80, 70, and 65 in the exam, practical, and homework respectively, then the weighted mean of these scores is obtainable. This is done by summing the products of each score and its respective weight, then dividing the result by the total sum of the three weights: weighted mean = ((50*80)+(70*30)+(65*20))/(50+30+20) = 74 (Salkind,…… [read more]
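The worked mean and weighted-mean examples above can be reproduced in a few lines of Python:

```python
def mean(values):
    """Arithmetic mean: the sum of the observations over their count."""
    return sum(values) / len(values)

def weighted_mean(values, weights):
    """Weighted mean: each value contributes in proportion to its
    weight, and the total is divided by the sum of the weights."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# The worked examples from the text:
m = mean([2, 7, 4, 9, 3])                       # 25 / 5 = 5.0
wm = weighted_mean([80, 70, 65], [50, 30, 20])  # 7400 / 100 = 74.0
```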

Chi-Square Analysis: The History, Development, and Applicability of a Common Statistical Tool Term Paper

… Chi Square

An Overview of Chi-Square Analysis: The History, Development, and Applicability of a Common Statistical Tool

There are many different types of information available in the world, and each type can be utilized in very different and highly specific ways depending on both the form of the information and the needs of those utilizing it. These types of information are, in some perspectives, classified into two broader types: quantitative and qualitative. Quantitative information is information that can essentially be boiled down to numeric form, and can arise out of either counting or measurement, leading to discrete or continuous data points that can be further analyzed and manipulated to result in deeper understandings of quantifiable phenomena and events. Qualitative data, on the other hand, cannot be reduced to numbers and must be analyzed through other means. Statistics has developed as a field of mathematics that enables researchers to compare and analyze both quantitative and qualitative information in many different ways.

The Chi-Square analysis is one statistical tool that has been developed as a way of analyzing and manipulating qualitative data. Specifically, the Chi-Square method was developed to compare categorical data and determine what type of relationship existed between different qualitative variables (HWS 2010). A drug trial, for instance, might compare the rate at which symptoms improved among people receiving a drug against the rate in another group not taking the drug -- the Chi-Square test would be a necessary tool in determining the drug's true efficacy.
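A minimal sketch of the Pearson chi-square statistic for such a trial (the counts below are hypothetical, and this computes only the statistic, not the p-value, which would require the chi-square distribution):

```python
def chi_square(table):
    """Pearson's chi-square statistic for a contingency table:
    the sum over cells of (observed - expected)^2 / expected, where
    expected = row total * column total / grand total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical drug trial: rows are drug/placebo,
# columns are improved/not improved.
stat = chi_square([[60, 40], [45, 55]])
```

If the observed counts exactly match the expected counts, the statistic is zero; larger values indicate a stronger departure from independence.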

There are actually several different types of Chi Square analysis that can be utilized, depending on the needs and scope of the research, but the most common of these is the Pearson's Chi-Square test. Karl Pearson was a scientist, philosopher, and mathematician of some renown both during and after his day, and his development of a specific method for analyzing the goodness of fit of a sample distribution and for testing the independence of certain variables/phenomena (as in the drug trial example given above) is only one of his contributions to the worlds of science and data analysis (Plackett 1983). In 1900, he began working with the Chair of Zoology at the University College of London who supplied a great deal of data to Pearson at a time when his decade of work in correlation (methods of determining the degree to which separate observations occurred together or specifically in the other's absence, suggesting some relationship) and regression analyses (determining the relationship(s) between two or more independent variables and a dependent variable) were culminating into the method…… [read more]

Geometry Manipulative Essay

… Geometry Manipulative

Elementary Geometry Manipulative

Complex math problems can be difficult to introduce to elementary students. Yet there are many patterns within mathematics that, if explained properly, can be learned by young eager minds. Thus, it is with this in mind that this geometry lesson aims to teach angle relationships to fifth graders.

The math level being explored is that of the fifth grade. This is an old enough age to begin implementing algebraic and geometrical concepts within the curriculum. Within this grade level, there are three major standards presented by the National Council of Teachers of Mathematics (NCTM): multiplicative thinking, equivalence, and computational fluency (National Council of Teachers of Mathematics 1989). Thus, it is a perfect age for the beginning basics of geometry. Understanding the formula for finding missing degrees of angles seems very simple but needs a clear and concise explanation. Therefore, within this lesson plan, the concepts of angles, degrees, and the relationships between parallel lines and their corresponding angles will be introduced alongside the corresponding algebraic strategy for finding missing variables. In working with the unknown variable, x in most cases, students begin to understand equivalence by using x as a factor which completes a specific sequence. For example, if you know one angle within a split sector, you can find the other with the knowledge that the two sum to 180 degrees. Thus, the known degree plus the unknown (x) will equal 180 degrees. This concept will satisfy the beginning workings of multiplicative thinking, equivalence, and computational fluency. In order for students to grasp this concept they will need to work with the provided handout and their pencils.
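The known-plus-unknown-equals-180 relationship described above amounts to a one-line rearrangement, sketched here for illustration:

```python
def missing_angle(known_deg, total_deg=180):
    """Solve known + x = total for the unknown angle x, as when two
    angles along a straight line must sum to 180 degrees."""
    return total_deg - known_deg

# If one angle at the split sector measures 65 degrees,
# its supplement is x = 180 - 65.
x = missing_angle(65)
```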

After practice with this hand out, students should be able to grasp the geometrical…… [read more]

Framing the Research Problem: Basic Steps Essay

… Framing the Research Problem: Basic Steps

The specific steps undertaken when framing a research problem for a study will vary with the type of discipline, subject area of research, and the level of accuracy demanded of the research. For example, a small exploratory study designed to see if there is a market for a new fitness studio in a suburban area will demand a different level of scrupulosity than a statistical study designed to see if a new drug has dangerous side effects within certain demographic populations. However, broadly speaking, the steps of the research process are as follows (Marketing research, 2009, Quick MBA):

Define the problem

The problem must be framed in a clear question format, and the data the research is attempting to accumulate must provide a reasonable answer to that question. For example 'is there a statistically significant correlation between hours of television watched and a child's BMI (Body Mass Index)' or 'what characteristics do mothers say influence their breakfast cereal choice when shopping for the family' are both examples of research-based questions.

Most research is framed as a null hypothesis: in other words, the research statement is the opposite of what the researcher actually wants to prove. In the case of a study regarding television watching and childhood obesity, the null hypothesis might state that there is no correlation between hours of television the child watches and the likelihood that the child's BMI will be in the overweight or obese range. The null hypothesis often states conventional wisdom or the status of the control group.
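The TV-hours/BMI question above would ultimately be tested by computing a correlation coefficient from collected data. A sketch with entirely hypothetical data (a real study would also need a significance test before rejecting the null hypothesis):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two variables:
    the sum of products of deviations from the means, scaled by the
    root sums of squared deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: daily TV hours and BMI for six children.
tv = [1, 2, 2, 3, 4, 5]
bmi = [16, 17, 18, 19, 22, 24]
r = pearson_r(tv, bmi)
```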

Step 2: Determine research design

Is the research merely designed to describe a specific phenomenon, such as the average age or weight of a consumer of fast food, in the form of descriptive research? Or is it designed to explore possible reasons for the statistical tendency and take the form of exploratory research? Exploratory research might follow a particular population for a period of time to suggest a correlation, such as between obesity and number of fast food restaurants located near a child's school. A causal research design that aims to show a clear cause-and-effect relationship demands a more narrow study design, and usually a control group. It strives to eliminate other possible variables that could influence the outcome: for example, children who live in areas with many fast food establishments near their school might have less access to other leisure-time pursuits because of poverty and a poor diet at home -- factors beyond the location of fast food restaurants might be more of a cause, rather than the availability of fast food alone. More fast food restaurants…… [read more]

Kde and Kme Kernel Density Estimation (Kde) Term Paper

… KDE and KME

Kernel Density Estimation (KDE)

Abstract -- Kernel Density Estimation (KDE) is also known as the Parzen Window Method, after Emanuel Parzen, the pioneer of kernel density estimation. Density estimation entails constructing an estimate based upon observed… [read more]
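A minimal Parzen-window density estimate with a Gaussian kernel can be sketched as follows (the sample and bandwidth below are hypothetical):

```python
import math

def gaussian_kde(data, x, bandwidth):
    """Parzen-window density estimate at point x: the average of
    normalized Gaussian kernels centred on each observation."""
    n = len(data)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
                      for xi in data)

# Density estimate from a small symmetric sample, evaluated at 0.
sample = [-1.0, -0.5, 0.0, 0.5, 1.0]
density = gaussian_kde(sample, 0.0, bandwidth=0.5)
```

The bandwidth controls the smoothness of the estimate: small values produce a spiky curve that follows individual observations, large values a smoother one.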

Healthcare Practitioners as Well as Other Professionals Essay

… Healthcare practitioners as well as other professionals must know how to deal with statistical data in order to do their jobs on a daily basis. As Rumsey (2003) points out, professionals are presented with statistical data and claims constantly and they must be able to understand how such claims are formulated and whether they are accurate in order to decide what to do about the information presented in such claims. This brief paper will outline some of the most important factors that professionals must understand and apply in order to make practical use of statistics in their work obligations.

Perhaps the first and most important information a professional must have about statistical claims is how the data was gathered and what methodologies were used to crunch the numbers. While the practitioner doesn't necessarily need to know how data was coded or what formulas were used in order to analyze results, a basic understanding of both factors will help the practitioner see if there are any red flags in the data. For example, claims that are made about etiology of diseases should be tested under controlled conditions with suitably large and varied populations to ensure that the data is accurate. A study that relies on self-reporting of symptoms in the form of a survey may be adequate for an exploratory study, but not for making determinations about scientific bases for disease or treatment. Therefore, the practitioner must understand the difference between quantitative and qualitative research and must know that quantitative research, when conducted with appropriate controls and adequate methodologies, can make stronger claims about causal factors.

Rumsey points out the most important statistical measures the practitioner must understand and apply…… [read more]

Geometry Proof Research Proposal

… Geometry Proof

Geometry as a subject learned in school has a primary purpose, and that is to improve the ability of students to reason logically. Logical reasoning is one of the most vital things that a student can learn, not… [read more]

Psychological Research "It Is Difficult to Turn Thesis

… ¶ … Psychological Research

"It is difficult to turn the pages of a newspaper without coming across a story that makes an important claim about human nature" (America Psychological Association, 2003, par. 1).

Often, we come across specific claims about… [read more]

Size in the Field of Statistics Thesis

… ¶ … Size

In the field of statistics, the term effect size is used to refer to the degree of relationship between two variables. Quite simply, it is the size of the effect that one thing has on another. There are many different examples of effect size that we encounter in our daily lives; it is a comparison and judgment we are so used to making that it appears like second nature. Think about the last time you saw a commercial for a product that advertised itself as "30% more effective than the leading brand," or made some similar "more-than" claim. This is a very direct and open use of effect size -- or at least claimed effect size -- to make what the advertisers want you to believe is a mathematical point. They are basically saying that their product, whatever it is, has a bigger effect size with respect to whatever that product is intended to do. For instance, if a commercial for Brand X weight-gain powder for body builders claimed it was 20% more effective than Brand Y, they would be saying that their powder makes your muscles grow 20% more than the other powder -- that their powder has a larger effect size on muscles.

Though understanding effect size is relatively simple, understanding the mathematical formula behind it can be a little trickier. There are actually many different ways to measure effect size, some of them more reliable for certain cases than others. In general, effect size applies to the meta-analysis aspects of statistics. This means it is used to analyze the analysis, in a way -- while other data is analyzed to establish a correlation, effect size measures the strength or degree of that correlation. According to Professor Becker's overview of effect size on the University of Colorado website (2000), one of the most commonly used measures of effect size is Cohen's d (section II). The "d" stands for difference, and this measure gauges effect size between two independent groups of data points. The formula for calculating Cohen's d is (M1-M2)/s, where M1 is the mean of the first set of data points, M2 is the mean of the second…… [read more]
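The (M1 - M2)/s formula can be sketched in Python. The pooled standard deviation is used for s here, which is one common choice rather than the only one, and the data are hypothetical:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: the difference between two group means divided by
    the pooled standard deviation, per the (M1 - M2)/s formula."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    ss1 = sum((x - m1) ** 2 for x in group1)
    ss2 = sum((x - m2) ** 2 for x in group2)
    pooled_sd = math.sqrt((ss1 + ss2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical weight gain (kg) under Brand X vs Brand Y powder.
d = cohens_d([5, 6, 7, 8], [4, 5, 6, 7])
```

By a common rule of thumb, d around 0.2 is a small effect, 0.5 medium, and 0.8 large.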

Algebra in Daily Life it Strange Essay

… Algebra in Daily Life

It is strange, though kind of comforting, to think of the many things in our lives that math in general and algebra specifically are so involved in. Strange because we don't often have to do the mathematical operations involved in order to do our daily tasks and go through our routine, and comforting because the concrete and unchanging nature of numbers adds some certainty to this world that can so often seem chaotic and entirely ungrounded. Even if algebra can't predict what will happen to oil prices and mortgage meltdowns, it can at least provide us with an explanation of what's happening and how it's happening -- and maybe even why.

I will leave this kind of math to the economists and the members of the Federal Reserve board, however; I have neither the know-how nor the inclination to become involved in that mountain of numbers. Still, there are plenty of smaller ways in which numbers play a role in my every day life. One of those ways is my bicycle. I ride my bicycle almost everywhere, and several times I have had minor breakdowns on the road. These incidents have given me a basic understanding of the way my bike and its various gears move me around, and the functions of the bike and its gears can be expressed algebraically. The actual equations that describe the bike's travel would be quite complex and would require a great deal of measurement and experimentation, but the basic equations that would be needed to calculate effort, speed, and travel time for various distances in various gears can be simulated in a simple thought experiment, using simple numbers.

First, a basic description of the bike is needed. I own a twenty-one-speed mountain bike, but around town I'm usually on my three-speed cruiser, and for the sake of simplicity, let's look at the equations pertaining to that bike. Terms that need defining in terms of assigning numerical value are the tire circumference (which is also the distance traveled per revolution of the tire), number of revolutions of the tire that result from each turn of the pedals, and effort…… [read more]
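The "simple thought experiment" the essay proposes can be sketched directly; the tooth counts, wheel size, and cadence below are hypothetical stand-ins for the measurements the author describes:

```python
import math

def distance_per_pedal_turn(chainring_teeth, cog_teeth, wheel_diameter_m):
    """Metres travelled per pedal revolution: the gear ratio
    (chainring teeth / cog teeth) times the wheel circumference
    (pi * diameter)."""
    gear_ratio = chainring_teeth / cog_teeth
    return gear_ratio * math.pi * wheel_diameter_m

def travel_time_minutes(distance_m, cadence_rpm, metres_per_turn):
    """Minutes to cover a distance at a steady pedalling cadence."""
    return distance_m / (cadence_rpm * metres_per_turn)

# Hypothetical cruiser gear: 44-tooth chainring, 22-tooth cog,
# 0.66 m wheels; a 2 km trip at 60 pedal turns per minute.
per_turn = distance_per_pedal_turn(44, 22, 0.66)
minutes = travel_time_minutes(2000, 60, per_turn)
```

Switching gears changes only the ratio term, which is exactly why a higher gear covers more ground per pedal stroke at the cost of more effort.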

Group Spending Comparison Between British, German, French Term Paper

… Group Spending Comparison Between British, German, French, And Italian Consumers

From the results, we conclude that the Germans, French, and Italians outspend the British in groups, but that the variance is higher amongst Germans. This is shown by the higher upper limit amongst Germans as compared to others. Germans have the highest standard deviation and standard error, which shows that there is more variance than amongst other nationalities. Since the means fall near the medians, we can say that our sample means are reliable.

Task 1(b) - Individual Spending






[Table: Sample Standard Deviation, Standard Error, Estimate of Mean, Upper Limit, Lower Limit -- values not reproduced in this excerpt]

Comments: The French and Italians have the highest means, while the British and the Germans are close together with a lower spending per person. The variance, however, is much higher for the Germans than for the British, indicating that there may be a subset of higher spenders. The same is true for the French, which could mean a skew toward the higher or lower spending range. This difference between Germans and Brits is supported by the higher limit number for Germans. The French and Italians seem to uniformly spend more, as evidenced by their mid-sized SD and relatively high lower limit and relatively low upper limit. The sample variance for the Germans is higher, as shown by the higher standard deviation.

Task 1- - Difference in Means of Group Spending

Group Spending






[Table: Sample SD -- values not reproduced in this excerpt]

Comments: As we can see from the given results, the z-score value lies within +/- the standard deviation for all the values, which means that the null hypothesis is accepted for each pair lying within the 95% confidence limit.
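The z-score-versus-95%-confidence-limit reasoning in these comments can be sketched as follows (the means and standard errors are hypothetical, since the excerpt's tables are not reproduced):

```python
import math

def z_for_mean_difference(mean1, se1, mean2, se2):
    """z-score for the difference of two sample means, using the
    combined standard error of the two estimates."""
    combined_se = math.sqrt(se1 ** 2 + se2 ** 2)
    return (mean1 - mean2) / combined_se

def within_95_limit(z):
    """The null hypothesis of equal means is retained at the 95%
    confidence level when |z| < 1.96."""
    return abs(z) < 1.96

# Hypothetical group-spending means and standard errors for two nationalities.
z = z_for_mean_difference(550.0, 12.0, 520.0, 10.0)
```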

Task 1(d) - Difference in Means of Individual Spending

Group Spending





[Table: Sample SD and Standard Error by nationality pair -- values not reproduced in this excerpt]
Comments: The above results demonstrate that the Z-score values are above the SD for all pairs but FI, BI, BF and BG. For these the null hypothesis is not proven; for all others the null hypothesis is accepted at the 95% confidence limit.

Task 1(e) - Regression

Comments: The above regression results show that if a family of a given nationality takes no holidays, its baseline expenditure will be 515.80, 550.69, 545.70 and 617.42 for the respective nationalities, as given in the table. If they do take a holiday, the expenditure will to a large extent be influenced by the slope of the regression equation.

Task 1(f) - Correlation

Comments: The R-squared coefficients show that 71%, 71%, 66% and 67% of the variation may be predicted by change in actual family size for the respective nationalities, while the remaining 29%, 29%, 34% and 33% is unexplained. The values of the t-statistics of the intercept and slope indicate that they cannot be zero, and for each nationality the regression equation can be… [read more]

Ethnomathematics: Mathematics and Culture Term Paper

… Ethnomathematics

What is "ethnomathematics," and what role should the study of indigenous counting systems play in the teaching of number and numeration?

Ethnomathematics, as its name suggests, is the study of the interaction between mathematics and culture. Ethnomathematics' most obvious application in elementary school classes may be in social studies units. Students can study how the development of different mathematical methods enabled the construction of various architectural structures that changed the way people lived and worshipped, like the pyramids. Also, the study of mathematics can be integrated into the study of history, as the development of Arabic numbers facilitated the creation of algebra. Mathematics classes may make use of word problems involving students of many ethnic backgrounds or include units such as examining the concept of slope in the designs of Navajo blankets, a technique used by one teacher in his curriculum (Fugit & Smith, 1995).

However, the application of ethnomathematics can be much broader. "Ethnomathematics is the study of mathematical techniques used by identifiable cultural groups in understanding, explaining, and managing problems and activities arising in their own environment" (Patterson, 2005). For example, the manner in which "professional basketball players estimate angles and distances differs greatly from the corresponding manner used by truck drivers. Both professional basketball players and truck drivers are identifiable cultural groups that use mathematics in their daily work. They have their own language and specific ways of obtaining these estimates and ethnomathematicians study their techniques" (Patterson, 2005).Likewise, the practical physics used by engineers is quite different from the theoretical physics explored by physicists in academia. Although ethnomathematics' use of indigenous counting techniques is often assumed to be non-Western in style, indigenous subgroups within Western society also exist. Approaching math from this practical perspective also provides a very concrete answer to the frequent complaint of many children that math has no application to 'real' life.

The importance of ethnomathematics is perhaps best illustrated by examining the origins of the word more closely. Broken down, "ethno" refers to culture, and culture encompasses national and tribal identity, professional status, and even age, in deference to Piaget's exploration of how children of various ages perceive depth and mass differently (Patterson, 2005). Culture…… [read more]

Browse the PA State Standards and Select Term Paper

… Browse the PA state standards and select the standards on which you would like to base your unit. In a separate document, write two to three paragraphs explaining how your unit of instruction supports local guidelines and student academic content standards. Remember to submit this with your task.


Construct figures incorporating perpendicular and parallel lines, the perpendicular bisector of a line segment and an angle bisector using computer software.

Draw, label, measure and list the properties of complementary, supplementary and vertical angles.

Classify familiar polygons as regular or irregular up to a decagon.

Identify, name, draw and list all properties of squares, cubes, pyramids, parallelograms, quadrilaterals, trapezoids, polygons, rectangles, rhombi, circles, spheres, triangles, prisms and cylinders.

Construct parallel lines, draw a transversal and measure and compare angles formed (e.g., alternate interior and exterior angles).

The standards on which I wish to base my unit are those that apply to geometry for grade 8 mathematics. Geometry is one of the most crucial components of the 8th-grade curriculum because it supports understanding of higher mathematics at the high school level. Not only is it a basic component of understanding calculus and linear algebra, but it is also the fundamental basis for most science and computer technology classes.

The unit I will create focuses on exploring the properties of the polygon. The polygon has many unique properties, and the unit is important because it shows students that the squares, rectangles, and even circles they are so familiar with fit within the framework of a greater geometrical understanding: polygons. This essentially ties together all of the seemingly random "shapes" they have had to master into a unified rule for application. According to Pennsylvania standards, students must be able to perform five different functions with polygons, including classifying polygons as regular or irregular up to the decagon and identifying, naming, and drawing the properties of many different polygons. The unit I construct focuses on teaching students the tools necessary for constructing and understanding the properties of polygons. It shows the universality of these shapes and how they can be constructed in accordance with their specific characteristics. My focus will be on understanding the "universal" properties of polygons and then applying them to specific shapes, so that students engage not so much in "memorization" as in understanding the root concept of a polygon.
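The classification task in the standard above (naming polygons and sorting them as regular or irregular up to the decagon) can be sketched in code. The naming table and side-length tolerance below are my own illustrative choices, not part of the PA standard, and a full regularity test would also check that all angles are equal; this sketch checks side lengths only:

```python
import math

# Names of polygons up to the decagon, as in the classification standard.
POLYGON_NAMES = {3: "triangle", 4: "quadrilateral", 5: "pentagon",
                 6: "hexagon", 7: "heptagon", 8: "octagon",
                 9: "nonagon", 10: "decagon"}

def side_lengths(vertices):
    """Side lengths of a polygon given its vertices in order."""
    n = len(vertices)
    return [math.dist(vertices[i], vertices[(i + 1) % n]) for i in range(n)]

def classify(vertices, tol=1e-9):
    """Name the polygon and label it regular or irregular by side length.

    NOTE: true regularity also requires equal angles (a rhombus has equal
    sides but is not regular); this sketch compares side lengths only.
    """
    sides = side_lengths(vertices)
    name = POLYGON_NAMES.get(len(sides), "polygon")
    label = "regular" if max(sides) - min(sides) < tol else "irregular"
    return name, label

# A unit square: four equal sides, so this sketch labels it regular.
print(classify([(0, 0), (1, 0), (1, 1), (0, 1)]))  # ('quadrilateral', 'regular')
# A 3-4-5 right triangle: unequal sides, so irregular.
print(classify([(0, 0), (4, 0), (0, 3)]))          # ('triangle', 'irregular')
```

Working from vertex lists rather than memorized shape names mirrors the unit's goal: students apply one universal rule instead of treating each shape as a separate fact.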

My unit fits specifically into the purpose of the PA standards for 8th-grade math, and directly addresses the need for students to have strong geometry experience going into high school. Therefore, this unit is critical for students' success in general, and especially for building core understanding.

B. Write four instructional goals for your unit (two for each lesson plan). Enter the goals in the Objectives field in the Unit and Lesson Builder templates.

Unit: Understanding the properties of a polygon (Geometry)

Lesson Plans:

Focus on polygons


Understand what makes an object a… [read more]

Statistical Analysis of Restaurant Patrons Term Paper

… Statistical Analysis of Restaurant Patrons

What type of research question (i.e., descriptive, comparative, relationship) is being asked by the researchers?

The research question being asked by the researchers is comparative: that of comparing the expression on a patron's face in a… [read more]

History of Pi Term Paper

… Greek Letter Pi Equations and Notations

Some of the most complex ideas and concepts came from the earliest history of mankind. For example, the notion of the Greek letter pi, or the ratio between a circle's circumference and diameter, stems back to early biblical times.

Algebra began its development in both Egypt and Babylonia around 1650 BC. However, historians remain uncertain as to whether new ideas traveled between the two countries. Written relics such as papyri and the Hammurabi clay tablets of this time indicate that algebra in Egypt was less sophisticated than that in Babylonia (Gullberg, 1997), in part because Egypt had a more primitive numeral system. It is also believed that Babylonian influences spread to Greece, 500 BC to 300 BC, then to the Arabian Empire and India, 700 AD, and finally to Europe, 1100 AD (Baumgart, 1969).

The equations and notations applied today were first used around 1700 BC and standardized by about 1700 AD, largely because the invention of the printing press in 1450 and the ability of scholars to travel easily from one location to another helped spread ideas across the continents. However, algebraic notation has never been completely consistent, and differences are still found in various parts of the world. For instance, many Americans use a period in decimals while Europeans use a comma, so pi may be approximated as 3.14 or 3,14 (Baumgart, 1969).

The concept of pi is also found in the Bible's Old Testament. For example, 1 Kings 7:23 says: "Also he made a molten sea of ten cubits from brim to brim, round in compass, and five cubits the height thereof; and a line of thirty cubits did compass it round about" (Blatner, 13), meaning, perhaps, that pi = 3. Scholars have debated this verse for centuries, and they are no closer to knowing the truth now. Some believe it is simply an approximation, while others argue that "...the diameter perhaps was measured from outside, while the circumference was measured from inside" (Tsaban, 76).
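The arithmetic behind the verse is easy to check: thirty cubits around against ten across gives a ratio of exactly 3. The inner/outer-measurement reading attributed to Tsaban can also be tested numerically; the wall thickness computed below is simply what the verse's own numbers would imply under that reading, not a figure from any source:

```python
import math

# The verse's measurements: 10 cubits across, 30 cubits around.
diameter, circumference = 10, 30
print(circumference / diameter)        # 3.0 -- the "pi = 3" reading

# The alternative reading: the 30-cubit line ran around the inside of
# the vessel while the 10 cubits spanned the outside. A true circle
# with inner circumference 30 has inner diameter 30 / pi, and the
# difference gives the wall thickness that reconciles the measurements.
inner_diameter = circumference / math.pi
wall = (diameter - inner_diameter) / 2
print(round(inner_diameter, 3))        # 9.549
print(round(wall, 3))                  # 0.225
```

A wall of roughly a quarter cubit is physically plausible for a cast vessel, which is why this reading has kept the debate alive.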

According to Tsaban (78), most of these scholars do not notice another use of pi that is more helpful: In Hebrew, each letter equals a certain number,…… [read more]

Statistical Language Term Paper

… Sample- subset of a larger population of cases; the term used to represent the population under study.

Population- set of cases from which a sample is drawn and to which a researcher wants to generalize from.

Frequency- symbolized by f, this is the number of cases with a particular value of a variable, or values of two or more variables.

Measures of Central Tendency- representative values for a set of scores that are considered averages of univariate information.

Mean- arithmetical average of all scores; the sum of cases divided by the number of cases.

Median- value that divides an ordered set of scores in half.

Mode- most frequently occurring score on a variable.

Measures of dispersion- statistics describing how scores spread out around an average or median.

Standard deviation- measure of variation in scores. It is also the square root of the variance.

Range- extent of the frequency distribution; the difference between the minimum and maximum value in a frequency distribution

Variance- square of standard deviation; statistical measure of the spread or variation of a group of values in a sample

Standard error- the standard deviation of a sampling distribution.

Descriptive statistics- refers to methods for summarizing information so that information is more intelligible, more useful or can be communicated more effectively.

Inferential statistics- refers to procedures used to generalize from a sample to the larger population and to assess the confidence we have in such generalizing.

Independent variable- variable determining the value of others; the variable in a mathematical statement whose value, when specified, determines the value of another variable or other variables

Dependent variable- an element in a mathematical expression that changes its value according to the value of other elements present

Confounding variable- variable that may be confused for the independent variable; commonly makes researchers fail to distinguish between the independent variable and confounding variable

Sampling- the process of selecting a sample group to be used as the representative or the random…… [read more]
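Several of the measures defined above map directly onto Python's standard library. A short sketch, using an invented sample of scores purely for illustration:

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]    # invented sample for illustration

# Measures of central tendency
print(statistics.mean(scores))       # 5.0  (sum of scores / number of scores)
print(statistics.median(scores))     # 4.5  (value splitting the ordered set in half)
print(statistics.mode(scores))       # 4    (most frequently occurring score)

# Measures of dispersion
print(max(scores) - min(scores))     # 7    (range: maximum minus minimum)
print(statistics.pvariance(scores))  # 4.0  (population variance)
print(statistics.pstdev(scores))     # 2.0  (square root of the variance)
```

Note how the output confirms two of the definitions above: the standard deviation (2.0) is exactly the square root of the variance (4.0), and the median (4.5) differs from the mean (5.0) because the distribution is not symmetric.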

Derivatives Calculus Term Paper

… Mathematics: Derivatives

Derivatives: an Explanation

"Derivative" is a mathematical answer to the question, "how quickly does it change?" For instance, if one noted that the national debt was changing rather quickly, one could also say that the national debt had a high derivative. If one specified and went on to say that the national debt was rising rather quickly, one could also say that the national debt had a high, positive derivative. It follows that if the national debt were falling rather quickly (although that is unlikely to happen), one could say that the national debt had a high, negative derivative.

When working with derivatives, it is important to avoid ambiguity. While most would assume that a high derivative was positive, the word "high" is not mathematically defined. For that reason, a certain vocabulary should be used when working with derivatives to ensure effective communication. The words "high" and "low" should be discarded in favor of well-defined terms like "negative" (below zero) and "positive" (above zero).

Establishing that vocabulary raises the question: what if the derivative is zero? If a derivative is the answer to the question, "how quickly does it change," and the answer is zero, that must mean it did not change at all. Therefore, if one were to say that the national debt was stable, or not changing, one could also say that it had a derivative of zero.
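The sign convention described above can be made concrete with a finite-difference sketch: for a yearly series, the year-over-year change plays the role of the derivative. The debt figures below are invented solely to show rising, falling, and flat series:

```python
# Approximate the derivative of a yearly series as the year-over-year
# change (a finite difference). All figures are invented for illustration.

def yearly_changes(series):
    """Differences between consecutive values: a discrete derivative."""
    return [b - a for a, b in zip(series, series[1:])]

rising  = [100, 110, 125, 145]   # positive derivative: values increasing
falling = [145, 125, 110, 100]   # negative derivative: values decreasing
stable  = [120, 120, 120, 120]   # zero derivative: no change at all

print(yearly_changes(rising))    # [10, 15, 20]
print(yearly_changes(falling))   # [-20, -15, -10]
print(yearly_changes(stable))    # [0, 0, 0]
```

The three outputs correspond exactly to the vocabulary above: every difference positive, every difference negative, and every difference zero.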

Using some basic concepts from algebra, another definition of "derivative" can be reached. A common tool in algebra is a graph, a system that plots points based on their values. Each point has two values, labeled "X" and "Y" respectively, and the point is located "X" units to the right (if positive) or left (if negative) of the origin…… [read more]
