Tag Archives: Evaluation

Selecting a Web-based Survey Tool

Have you used an online survey system? They often provide quick and easy solutions for gathering data and can be helpful as part of the design and development process to get feedback from testers, students, and instructors. Most of these products offer an intuitive dashboard for creating survey questions with templates and generate a URL that you can send in an email or post on a website to provide direct access to the instrument.

If you are interested in using a web-based survey system there are a few questions to answer first:

  1. What is your budget? Most of the vendors offer free and paid versions. The free versions, as you might expect, are more limited. 
  2. What types of questions do you need to ask? Multiple choice, open-ended, select all, rank order… take a close look at your instrument to see if there are special considerations related to item type.
  3. How many (items and participants) do you anticipate? Free versions often have a maximum number of items per survey and/or a maximum number of responses.
  4. Do you have any special requirements? If you need to add branching logic, for example, or randomly present your survey questions, these capabilities and many others are possible with online surveys.
  5. What are you going to do with the data you collect? These systems allow you to export participant responses in multiple formats – do you need something specific for reporting or analysis purposes?
  6. Do you need to customize? Different systems offer different options for creating custom URLs, adding images (e.g. logos), and creating color schemes. These may be more important if you are creating an instrument for distribution outside of your organization that would benefit from branding.
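To make question 5 concrete, here is a minimal sketch of what you might do with exported data. It assumes a CSV export with invented column names and sample rows (real exports vary by tool and survey design) and simply tallies the answers to one question:

```python
import csv
import io
from collections import Counter

# Hypothetical export: the column names and rows below are invented
# for illustration; real exports vary by survey tool and question setup.
exported_csv = """respondent_id,role,satisfaction
1,student,agree
2,instructor,strongly agree
3,student,agree
4,tester,disagree
"""

def tally(csv_text, column):
    """Count how often each answer appears in one column of an export."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row[column] for row in reader)

print(tally(exported_csv, "satisfaction"))  # Counter({'agree': 2, ...})
```

Even a rough sketch like this is worth doing before you commit to a tool: if the free tier won't export responses at all (see the charts below), you'll be retyping answers by hand.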

Recently I had the opportunity to review and select a survey tool for a project associated with Inside Online Learning. I had previous experience with SurveyMonkey and QuestionPro, so I started with those. It didn’t take long to see that there are many more tools to choose from, so I asked my Twitter network for suggestions. That request resulted in a nice list of tools to try – some with personal testimonials, others from the survey companies themselves.

My preference for this project was to go with a free version if at all possible – a brief survey with limited release as a pilot. I reviewed the websites of the seven survey systems that were recommended and created the comparison charts below along the way. These charts include the features I was looking for, but many more are available, including social media integration, secure SSL connections, multiple languages, analytics, etc.

| FREE* | SurveyMonkey | SurveyShare | SurveyGizmo | Zoomerang | Rational Survey |
|---|---|---|---|---|---|
| # of responses | 100 per survey | 50 per survey | 250 per month | 100 per survey | 1000 total |
| # of questions | 10 per survey | 12 per survey | Unlimited | 12 per survey | 100 total / 10 surveys |
| Logic branching | No | Yes | Limited | No | No |
| Random questions | No | ? | Yes | No | ? |
| Export responses | No | No | CSV | No | No |
| PAID* | SurveyMonkey | SurveyShare | SurveyGizmo | Zoomerang | Rational Survey |
|---|---|---|---|---|---|
| Mid-range option** | $299/yr (Gold Plan) | $200/yr (Pro Plan) | $588/yr (Pro Plan) | $199/yr (Pro Plan) | $240/yr (Basic Plan) |
| # of responses | Unlimited | Unlimited | Unlimited | Unlimited | 500 total |
| # of questions | Unlimited | Unlimited | Unlimited | Unlimited | 5000 total / 50 surveys |
| Logic branching | Yes | Yes | Yes | Yes | Yes |
| Random questions | Yes | ? | Yes | Yes | ? |
| Export responses | Excel, CSV, PDF, SPSS, HTML, XML | Excel, CSV, SPSS | CSV, PDF | Excel, CSV, PDF | Excel, CSV, PDF |

* These charts are based on my interpretation of the information posted on the websites.

** In most cases there are multiple plans to choose from, offering a range of service packages and price points. This chart lists just one of the price categories. There are more and less expensive options for each system.

Also reviewed:

  • Qualtrics: This is an enterprise level system, which was overkill for my current needs with one small survey.
  • JotForm: Interesting! For me, not quite as intuitive as the others, but a customizable interface with emailed responses.

The comparison charts helped me narrow my list down to two: Zoomerang and SurveyGizmo. I then created my survey in both systems. My final selection was SurveyGizmo: it gave me the most room to work with in terms of the number of questions and responses allowed, and had a (slightly) more intuitive interface for creating and managing my survey. I deployed it with little difficulty and have been pleased with the results. I was able to create a professional-looking survey, insert a logo, and set up matrix-type questions. Should I need to upgrade to a paid version in the future, I will complete another comparison; while SurveyGizmo offers a lot of room in its free version, its paid options seem more costly than those of the other systems.

What additional features and functions should we consider? If you have deployed an online survey and have tips for selection and/or lessons learned, please consider sharing your recommendations here.

Image credit: stock.xchng

Rubrics. Yes? No? Maybe…

Instructional design work is increasingly standardized. As this happens, data is collected to measure student learning outcomes and rubrics come into play. Lots of them. Instructors use these rubrics (charts with a rating scheme for each element of an assignment) to evaluate student work.

Rubrics provide a way for the instructor to compare the quality of student work against a set of specific criteria. Ideally, if several sections of a course are running, each with a different instructor, all will evaluate student work similarly using a standard rubric: if two different instructors each evaluated Student A’s assignment using the same rubric, their individual evaluations would be the same.
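One way to picture how a standard rubric supports consistent scoring is as a simple data structure: named criteria paired with a point scale, totalled the same way no matter who applies it. The criteria and point values below are invented for illustration, not taken from any real assignment:

```python
# Hypothetical rubric: criterion names and maximum points are
# invented for illustration.
RUBRIC = {
    "thesis clarity": 4,
    "use of evidence": 4,
    "organization": 4,
    "mechanics": 4,
}

def score(ratings):
    """Total a set of instructor ratings against the rubric's criteria."""
    for criterion, points in ratings.items():
        if criterion not in RUBRIC:
            raise ValueError(f"unknown criterion: {criterion}")
        if not 0 <= points <= RUBRIC[criterion]:
            raise ValueError(f"rating out of range for {criterion}")
    return sum(ratings.values()), sum(RUBRIC.values())

# Two instructors applying the same rubric to Student A's work arrive
# at the same total whenever their per-criterion ratings match.
earned, possible = score({"thesis clarity": 3, "use of evidence": 4,
                          "organization": 3, "mechanics": 4})
print(f"{earned}/{possible}")  # 14/16
```

The point of the sketch is the constraint it makes visible: the only inputs are per-criterion ratings against fixed maximums, which is exactly what makes two instructors' totals comparable.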

There are pros and cons to the use of rubrics.

Rubrics can be helpful.

  • Rubrics encourage a more objective evaluation of a student’s work, reducing the possibility of comparing students to each other instead of the learning objectives.
  • Have you ever taken a course or submitted a paper and received a letter grade with no details about how that grade was determined? Rubrics can take some of the mystery away from the student’s perspective by clearly stating expectations, making the grade seem less arbitrary.

Rubrics can be limiting.

  • Creating accurate rubrics that measure student learning of a specific outcome is not easy. The rubric itself must be evaluated to determine whether it is reliable and valid.
  • The use of rubrics may result in less creativity from students who work to check the box for each of the expectations presented in the rubric’s categories and criteria.

Questions to consider:

  • Are rubrics always appropriate and effective? Think about types of assignments here – performance tasks, creative writing, etc. and context.
  • Who prepares the rubrics? I’ve seen this task handled by a hired assessment expert, by an instructional designer, and by a subject matter expert. Ready-made rubrics can also be found, and there are online ‘rubric makers’.
  • What about reporting? Are rubric scores/ratings useful beyond the classroom to drive changes in curriculum at a higher level?

It could be argued that while rubrics can and do serve a real purpose, there is a point at which they become too prescriptive; at that point, the focus becomes the measurement itself. There is a personal piece to learning, something more organic, where a student puts together knowledge and gains skill through his or her own unique set of experiences. Static rubrics can also reduce the ability of instructors to assess student work from their own perspectives and expertise. These things are difficult to capture with a rating scale. What are your thoughts on the pros and cons, and your successes and challenges with rubrics?

Resources for your continued exploration of assessment and rubrics:

Image credit: stock.xchng

Course Design – Plan for Evaluation

Evaluation, like needs assessment, is not always given the attention it requires in the process of instructional design. In real world situations, the timeline often drives the work and is usually too short to fully incorporate everything that should be done.

Creating an Evaluation Plan, as part of the initial design, helps you to make a lot of decisions before getting underway and to integrate evaluation tasks as you move forward with a project.

Your Evaluation Plan should include at a minimum:

  • List of objectives for the evaluation – why are you evaluating the instruction and to whom will the results be reported?
  • Description of the data you need to collect and why – what kind of information do you need to collect in order to find out if the instruction is effective? This can cover a wide range of measures, including:
    • Content accuracy
    • Learning outcome achievement
    • Usability of delivery format
    • Cost-effectiveness of the project
  • The logistics of how the evaluation will take place – how, when, and where will the evaluation happen, and who will be involved? Will you use surveys, administer tests, conduct interviews, etc.?
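The minimum components above can be sketched as a small structure to fill in at design time. The field names and sample entries here are assumptions for illustration, not a standard format:

```python
# Hypothetical evaluation-plan skeleton; field names and sample
# entries are invented for illustration, not a standard format.
evaluation_plan = {
    "objectives": [
        "Determine whether the module meets its learning outcomes",
        "Report findings to the design team",
    ],
    "data_to_collect": {
        "content accuracy": "SME review checklist",
        "learning outcomes": "pre/post assessment scores",
        "usability": "task-completion observations",
        "cost-effectiveness": "development hours vs. budget",
    },
    "logistics": {
        "how": ["survey", "interviews"],
        "when": "end of first pilot run",
        "who": "instructional designer + two instructors",
    },
}

def missing_sections(plan):
    """Flag any minimum section that is absent or empty."""
    required = ("objectives", "data_to_collect", "logistics")
    return [name for name in required if not plan.get(name)]

print(missing_sections(evaluation_plan))  # []
```

Writing the plan down in a fixed shape like this, whatever format you use, makes it obvious when a project has skipped a piece of the evaluation entirely.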

There are many evaluation models to choose from, most of them quite comprehensive. Consider adapting one into a customized plan for your project or work context.

There are full examples of evaluation plans available online. Two to review:

What is your experience with evaluation as part of the instructional design process? Please consider sharing your experiences related to priority, timeframe, and method. Is evaluation conducted by members of your design team or by an outside group?

Photo credit: Pink Sherbet Photography, Flickr

Connecting – Networked Learning

This post is my reaction to the George Siemens presentation on 9/29. The main topic was connectivism, but he covered much more ground ranging from a review of learning psychology theorists/theories to artificial intelligence and neuroscience. Using a couple of the presentation’s prompts as a guide, here are the ideas that resonated with me.

How do we teach (design) differently?

Since I am an instructional designer, not an instructor, I modified this question a little: How do we design formal educational experiences differently? As noted in the presentation, we have technologies available that allow us to store information and knowledge (and lots of it) outside of ourselves, outside of our own memories. These technologies offer ways to “off load part of our thinking”. Course design, particularly for online delivery, should incorporate these storage tools in ways that make the massive amounts of stored information accessible to learners and allow them to move beyond it. Designers are thinking more about how to get students to interact and engage with these knowledge stores through course assignments and activities. The days of weekly quizzes are not gone, but I see them less and less as a ‘must-have’ presented by a faculty content expert.

George Siemens also brought attention to the idea of resonance. One of the definitions of this word is “a quality of evoking response“. What resonates with a student? This is a question instructional designers should respond to more often when working with development teams, especially ones that include teaching faculty. I often ask the question: how should a learner be different after completing the course? Perhaps this question should be tweaked to further delve into resonance. What has meaning to the learner? What will have meaning to the learner? Motivation is part of this. Context is a part of this. Engagement is a part of this.

Capturing that opportunity to engage a student is related to resonance; identifying that opportunity is another thing. More careful evaluation techniques might help. End-of-course surveys are fairly common, but adding interviews or focus groups with students throughout a course, especially in its first run, could be helpful. Certainly not all students are motivated by the same things, and not all students find resonance in the same things within a course. What about including students in the course design process? Not instructional design students, but students from the department to which the course being designed belongs. Learner analysis and evaluation seem to be the two areas most likely to be abbreviated or dropped completely in course design. Why? Time and budget constraints, I suppose, but think about the lost opportunity there.

What about lurkers? Which is what I suppose I am, in the eci831 course where I find these presentations. What resonates with them and how are they engaged? What is their motivation for being in the course and for lurking? George suggests that being a lurker may not be a good thing. Not a bad thing, mind you, but a lost opportunity. There is an assumption that those who lurk are 1) less knowledgeable and 2) less confident members of the group. The idea is that these beginners could be helpful to the overall learning process of both their fellow learners and their instructors if they allow themselves and their own learning processes to be transparent to the others. In doing so, they offer a new and different perspective from that of their expert instructors and add to the experience of the rest of the class.

Designers should consider lurkers part of the audience, finding ways to pull these people into the conversation and making it more appealing for them to choose to be transparent to the other participants and their instructors. Alternative assignments might be one way, particularly in F2F or blended situations that easily lend themselves to this: students could choose to participate in either synchronous or asynchronous discussion but experience both. I once worked with a faculty member who taught one of those undergraduate auditorium courses with little class participation, except when he opened up a space in the course’s companion LMS site. There he saw not only active participation but also small study groups forming. This kind of thing could be designed into a course.

Where do we turn for guidance?

George pointed out that the youth culture of today is making up its own rules about how these technologies should be used, how to participate in networks, etc. Their parents and teachers aren’t modeling these things or showing them the ropes; they didn’t have these kinds of technologies and networks. It’s a similar situation in higher education. We need to turn to those who are actively using these technologies and networks. Encouraging these individuals, groups, and institutions to talk openly about what they are doing, and to document what works and doesn’t work in their context(s), is enormously important. Disseminating this information should be faster than publishing in books and journals; traditional publication just takes too long to get the word out. This will mean changing the mindset of higher ed at large regarding what counts as appropriate and scholarly work. While many people, like George Siemens, are actively blogging, can you get tenure this way? Maybe not.

Other stuff to pass along…

Photo credit: Faith Goble, Flickr