Tag Archives: Evaluation

Selecting a Web-based Survey Tool

Have you used an online survey system? They often provide quick and easy solutions for gathering data and can be helpful as part of the design and development process to get feedback from testers, students, and instructors. Most of these products offer an intuitive dashboard for creating survey questions with templates and generate a URL that you can send in an email or post on a website to provide direct access to the instrument.

If you are interested in using a web-based survey system, there are a few questions to answer first:

  1. What is your budget? Most of the vendors offer free and paid versions. The free versions, as you might expect, are more limited. 
  2. What types of questions do you need to ask? Multiple choice, open-ended, select all, rank order… take a close look at your instrument to see if there are special considerations related to item type.
  3. How many (items and participants) do you anticipate? Free versions often have a maximum number of items per survey and/or a maximum number of responses.
  4. Do you have any special requirements? If you need to add branching logic, for example, or randomly present your survey questions, these capabilities and many others are possible with online surveys.
  5. What are you going to do with the data you collect? These systems allow you to export participant responses in multiple formats – do you need something specific for reporting or analysis purposes? (A brief analysis sketch follows this list.)
  6. Do you need to customize? Different systems offer different options for creating custom URLs, adding images (e.g. logos), and creating color schemes. These may be more important if you are creating an instrument for distribution outside of your organization that would benefit from branding.
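
To illustrate item 5: once responses are exported, even a few lines of code can produce a quick summary. Here is a minimal Python sketch; the file name `responses.csv` and the column name `Q1` are hypothetical, and the actual export layout varies by vendor.

```python
# Minimal sketch: summarizing survey responses exported as CSV.
# "responses.csv" and the column "Q1" are hypothetical examples;
# each vendor lays out its export differently.
import csv
from collections import Counter

with open("responses.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

print(f"Total responses: {len(rows)}")

# Tally answers to one hypothetical multiple-choice item.
tally = Counter(row["Q1"] for row in rows if row.get("Q1"))
for answer, count in tally.most_common():
    print(f"{answer}: {count}")
```

If your plan calls for statistical analysis, this is also the point to confirm that the export format (CSV, Excel, SPSS, etc.) matches the tools you intend to use.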

Recently I had the opportunity to review and select a survey tool for a project associated with Inside Online Learning. I had previous experience with SurveyMonkey and QuestionPro, so I started with these first. It didn’t take long to see that there are a lot more tools to choose from, so I asked my Twitter network for suggestions. That request resulted in a nice list of tools to try – some with personal testimonials, others from the survey companies themselves.

My preference with this project was to go with a free version if at all possible – a brief survey with limited release as a pilot. I reviewed the websites of the seven survey systems that were recommended and created these comparison charts (below) along the way. These charts include the features I was looking for, but there are many, many more available, including social media integration, secure SSL connections, multiple languages, analytics, etc.

| FREE* | SurveyMonkey | SurveyShare | SurveyGizmo | Zoomerang | Rational Survey |
| --- | --- | --- | --- | --- | --- |
| # of responses | 100 per survey | 50 per survey | 250 per month | 100 per survey | 1000 total |
| # of questions | 10 per survey | 12 per survey | Unlimited | 12 per survey | 100 total / 10 surveys |
| Logic branching | no | yes | limited | no | no |
| Random questions | no | ? | yes | no | ? |
| Export responses | no | no | CSV | no | no |

| PAID* | SurveyMonkey | SurveyShare | SurveyGizmo | Zoomerang | Rational Survey |
| --- | --- | --- | --- | --- | --- |
| Mid-range option** | $299/yr (Gold Plan) | $200/yr (Pro Plan) | $588/yr (Pro Plan) | $199/yr (Pro Plan) | $240/yr (Basic Plan) |
| # of responses | Unlimited | Unlimited | Unlimited | Unlimited | 500 total |
| # of questions | Unlimited | Unlimited | Unlimited | Unlimited | 5000 total / 50 surveys |
| Logic branching | yes | yes | yes | yes | yes |
| Random questions | yes | ? | yes | yes | ? |
| Export responses | Excel, CSV, PDF, SPSS, HTML, XML | Excel, CSV, SPSS | CSV, PDF | Excel, CSV, PDF | Excel, CSV, PDF |

* These charts are based on my interpretation of the information posted on the websites.

** In most cases there are multiple plans to choose from, offering a range of service packages and price points. This chart lists just one of the price categories. There are more and less expensive options for each system.

Also reviewed:

  • Qualtrics: This is an enterprise level system, which was overkill for my current needs with one small survey.
  • JotForm: Interesting! For me, not quite as intuitive as the others, but a customizable interface with emailed responses.

The comparison charts helped me narrow my list down to two: Zoomerang and SurveyGizmo. I then created my survey in both systems. My final selection was SurveyGizmo: it gave me the most room to work with in terms of the number of questions and responses allowed, and had a (slightly) more intuitive interface for creating and managing my survey. I deployed it with little difficulty and have been pleased with the results. I was able to create a professional-looking survey, insert a logo, and set up matrix-type questions. Should I need to upgrade to a paid version in the future, I will complete another comparison; while SurveyGizmo offers a lot of room in the free version, its paid options seem more costly than the other systems.

What additional features and functions should we consider? If you have deployed an online survey and have tips for selection and/or lessons learned, please consider sharing your recommendations here.

Image credit: stock.xchng

Rubrics. Yes? No? Maybe…

Instructional design work is increasingly standardized. As this happens, data is collected to measure student learning outcomes, and rubrics come into play. Lots of them. Instructors use these rubrics (charts with a rating scheme for each element of an assignment) to evaluate student work.

Rubrics provide a way in which the instructor can compare the quality of student work against a set of specific criteria. Ideally, if you have several sections of a course running, each with a different instructor, all will evaluate student work similarly using a standard rubric – if two different instructors each evaluated Student A’s assignment using the same rubric, their individual evaluations would be the same.
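
As a concrete (if simplified) illustration, a rubric is essentially a small data structure: a set of criteria, each rated on a fixed scale. The sketch below uses invented criteria and ratings; it just shows why two instructors applying the same rubric produce directly comparable scores.

```python
# Minimal sketch of a rubric as a data structure.
# The criteria, scale, and ratings below are invented for illustration.
RUBRIC = {
    "Thesis clarity": {"max": 4},
    "Use of evidence": {"max": 4},
    "Organization": {"max": 4},
}

def score(ratings: dict) -> float:
    """Sum per-criterion ratings; any instructor using the same
    rubric produces a directly comparable percentage score."""
    total = sum(ratings[criterion] for criterion in RUBRIC)
    possible = sum(level["max"] for level in RUBRIC.values())
    return round(total / possible * 100, 1)

# One instructor's ratings of Student A's assignment:
print(score({"Thesis clarity": 3, "Use of evidence": 4, "Organization": 3}))  # 83.3
```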

There are pros and cons to the use of rubrics.

Rubrics can be helpful.

  • Rubrics encourage a more objective evaluation of a student’s work, reducing the possibility of comparing students to each other instead of the learning objectives.
  • Have you ever taken a course or submitted a paper and received a letter grade with no details about how that grade was determined? Rubrics can take some of the mystery away from the student’s perspective by clearly stating expectations, making the grade seem less arbitrary.

Rubrics can be limiting.

  • Creating an accurate rubric that measures student learning of a specific outcome is not easy. The process requires evaluation of the rubric itself to find out if it is reliable and valid.
  • The use of rubrics may result in less creativity from students who work to check the box for each expectation presented in the rubric’s categories and criteria.

Questions to consider:

  • Are rubrics always appropriate and effective? Consider the type of assignment – performance tasks, creative writing, etc. – and the context.
  • Who prepares the rubrics? I’ve seen this task given to a hired assessment expert, assigned to an instructional designer, and assigned to a subject matter expert. Ready-made rubrics are also available, and there are online ‘rubric makers’.
  • What about reporting? Are rubric scores/ratings useful beyond the classroom to drive changes in curriculum at a higher level?

It could be argued that while rubrics can and do serve a real purpose, there is a point at which they become too prescriptive and the measurement itself becomes the focus. There is a personal piece to learning, something more organic, where a student puts together knowledge and gains skill through his or her own unique set of experiences. Static rubrics can also limit instructors’ ability to assess student work from their own perspectives and expertise. These things are difficult to capture with a rating scale. What are your thoughts on the pros and cons, and your successes and challenges with rubrics?

Resources for your continued exploration of assessment and rubrics:

Image credit: stock.xchng

Course Design – Plan for Evaluation

Evaluation, like needs assessment, is not always given the attention it requires in the instructional design process. In real-world situations, the timeline often drives the work and is usually too short to fully incorporate everything that should be done.

Creating an Evaluation Plan, as part of the initial design, helps you to make a lot of decisions before getting underway and to integrate evaluation tasks as you move forward with a project.

Your Evaluation Plan should include at a minimum (a simple structured sketch follows the list):

  • List of objectives for the evaluation – why are you evaluating the instruction and to whom will the results be reported?
  • Description of the data you need to collect and why – what kind of information do you need to collect in order to find out if the instruction is effective? This can cover a wide range of measures, including:
    • Content accuracy
    • Learning outcome achievement
    • Usability of delivery format
    • Cost-effectiveness of the project
  • The logistics of how the evaluation will take place – How, when, where, and who will be involved in evaluation? Will you use surveys, administer tests, conduct interviews, etc.?
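
Since a plan like this often gets reused across projects, it can help to keep the skeleton in a form you can copy and fill in. Below is a minimal sketch in Python; every value is a placeholder of my own invention, not a prescribed template.

```python
# Minimal sketch of an evaluation plan skeleton.
# Every value below is a placeholder to be replaced for a real project.
evaluation_plan = {
    "objectives": [
        "Determine whether learners achieve the stated outcomes",
        "Report findings to the course sponsor",
    ],
    "data_to_collect": {
        "content_accuracy": "SME review checklist",
        "outcome_achievement": "Pre/post assessment scores",
        "usability": "Learner survey on the delivery format",
        "cost_effectiveness": "Development cost vs. projected reach",
    },
    "logistics": {
        "methods": ["survey", "test", "interview"],
        "when": "End of the pilot offering",
        "who": "Instructional designer plus an external reviewer",
    },
}

print(evaluation_plan["logistics"]["methods"])  # ['survey', 'test', 'interview']
```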

There are many evaluation models to choose from – Kirkpatrick’s four levels is one well-known example – and most are very comprehensive. Consider adapting one into a customized plan for your project or work context.

There are full examples of evaluation plans available online. Two to review:

What is your experience with evaluation as part of the instructional design process? Please consider sharing your experiences related to priority, timeframe, and method. Is evaluation conducted by members of your design team or by an outside group?

Photo credit: Pink Sherbet Photography, Flickr