EDT 8230
Smart Learning Objectives
Learning Objectives
Upon completion of this content, you should be able to:
- Explain the educational technology known as simulations
- Explain the educational technology known as web conferencing
- Describe the difference between the research measurements of validity and reliability
INTRODUCTION
On this page, you will learn about a few common educational technologies and the educational research methods used to study their effectiveness. I will cover the details of simulations and web conferencing, and then walk you through a research study for each of these technologies.
Educational Technology
The two main technologies I will review on this page are simulations and web conferencing. Both will be required as my workplace transitions from traditional face-to-face classroom training to a virtual training program, and they are key components in delivering our training materials to a virtual workforce.
Simulations allow learners to learn by doing. They usually include a series of trials in which the learner makes decisions and then experiences the consequences of making the correct, or incorrect, choice. This is an extremely effective way to teach through experiential learning when the cost of making a mistake is high or when the technology being taught is not yet available in its final form. These types of learning events are very common in the military, medical, and technology fields. A simulation might model how to use a new system or how to interact with a living being.
Web conferencing allows learners and facilitators to meet virtually using phone/VoIP audio conferencing and computer-based video conferencing. This technology is critical for holding effective learning events when learners and facilitators are spread across a variety of geographical locations, and it is especially valuable when there is no common physical location where the learners and the facilitator can gather.
RESEARCH MEASUREMENTS
Reliability and validity are crucial measures for any experiment and the associated research. The two terms are often confused with each other, so I will begin with a clear definition of each, along with a few examples. Finally, I will close with a look at how we use these concepts in research at the care center.
Reliability is the process by which researchers evaluate the ability of a test to produce similar outcomes over time (Salkind, 2017). When evaluating a test's reliability, the researcher has to consider a few scores, and reliability tests can help with this. The first is the observed score, the actual score from the test results. The next is the true score, a measure of the true value of the variable being tested. The true score can be very difficult to ascertain because of the third score that must be considered, the error score, which captures all of the factors that can affect an observed score. Essentially, the true score plus the error score produces the observed score (Salkind, 2017).
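The relationship among these three scores can be sketched with a small, hypothetical example (the numbers here are invented purely for illustration):

```python
# Hypothetical illustration of the score model described above:
# observed score = true score + error score (Salkind, 2017)
true_score = 85      # the value of the trait we are actually trying to measure
error_score = -3     # e.g., fatigue or a noisy testing room lowered the result
observed_score = true_score + error_score

print(observed_score)  # 82: the score the researcher actually records
```

Because the error score is unknown in practice, researchers can only see the observed score, which is why reliability estimates are needed.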
Two examples of reliability are test-retest and parallel forms. Test-retest reliability measures the stability of a test by retesting the same group of participants with the same test at two different times (Salkind, 2017). This type of reliability is often examined to ensure there was minimal impact from variables outside of the tested items. Parallel-forms reliability measures the equivalence of alternate forms of the same test given to the same participants (Salkind, 2017); it helps ensure that the form of the test itself did not affect the results.
Validity is the process by which the researcher evaluates a test to ensure that it measures what the researcher intends it to measure (Salkind, 2017). There are a few things to keep in mind when assessing validity. First, validity refers to the interpretation of a test's results, not to the test itself. Second, validity is a matter of degree rather than all-or-none, meaning a test can be partially valid. Third, a test's validity must be evaluated in the same context in which the test was conducted.
Two types of validity are concurrent validity and predictive validity. Concurrent validity measures how well a test can estimate a criterion by examining the correlation between the test results and the criterion results (Salkind, 2017); it is generally used to evaluate how well a test estimates current performance. Predictive validity measures the ability of a test to predict future performance on a specific criterion (Salkind, 2017) and is often used when the researcher wants to forecast later outcomes. These tests of validity give researchers increased confidence that a test is in fact measuring what it should, allowing them to continue on their current path or to make course corrections based on the validity of the tests being evaluated.
In my career field, validity and reliability are both very important when evaluating the effectiveness of training formats and facilitators. The information gathered is often used to determine whether we need to change the way we train our associates or the person facilitating the classes. For example, this past new-hire season we developed a set of test questions to assess the associates' ability to retain the material being trained, and it proved to be an effective measure of their success. Before we could begin using the questions, though, we had to establish their reliability and validity. For reliability, we focused on test-retest, evaluating each class's performance on a daily basis to ensure consistent results across all of the training sessions. For validity, we focused on content validity: we took the questions we prepared and had them reviewed by the SMEs in the departments to confirm that the questions tested the specific learning the associates would need in order to do the work we were training them for.
QUALITATIVE STUDY REVIEW
The qualitative study below looks at the effectiveness of simulations and how an effective research study would evaluate the impact of using simulations on learner retention. Some opportunities exist with the design of the study described below, and I will be updating the presentation to provide a more qualitative approach; the updated version will look at how simulations affect the learner's affinity toward learning procedural knowledge. This is extremely helpful in my current career field, as we are in the process of virtualizing our training curriculum, and this evaluation will allow me to fully understand the risks and benefits of this technology.
QUANTITATIVE STUDY REVIEW
The quantitative study below looks at the effectiveness of web conferencing and how an effective research study would evaluate the impact of using web conferencing on learner retention. This is extremely helpful in my current career field, as we are in the process of virtualizing our training curriculum, and this evaluation will allow me to fully understand the risks and benefits of this technology.
Please complete the knowledge check below so we can better understand the effectiveness of this lesson.
References
Salkind, N. J. (2017). Exploring research (9th ed.). Boston: Pearson.