24:3 (2009:09) 24th Conference: 2009 Conference Evaluation Report

September 15, 2009 | Posted in Conference Reports, Evaluation & Assessment

JUNE 3-7, 2009

Submitted August 6, 2009 by:  2009 Evaluation & Assessment Committee

Ann Doyle Fath (chair), Janice Lindquist (co-chair), Jana Brubaker, Sarah Corvene, Smita Joshipura, Barbara McArthur, Virginia Rumph, Martha Spring, Christina Torbert.

NASIG’s 24th annual conference was held in Asheville, North Carolina.  The conference featured three preconferences, three vision sessions, thirteen strategy sessions, fourteen tactics sessions, and nine poster sessions.  Other events included an opening reception at the Marriott Renaissance (the headquarters hotel), a special dinner and tour at the Biltmore House, and a reception at the Crest Center.

This year, 242 of the 448 conference attendees filled out the online evaluation form.  This 54% response rate is a drop of nine percentage points from last year's 63% (328 of 520), which had itself been a nine-point increase over the previous year.  This was the second year that evaluation forms were available only online.  As an incentive to fill out the forms, the Executive Board once again approved a drawing for a free conference registration for 2010 or 2011; 176 of the respondents entered the drawing.  The winner will be announced in the NASIG Newsletter.



Respondents were asked to give ratings on a scale of 1 to 5, with 5 being the highest rating.  The overall rating for the 2009 conference was 4.31, almost equal to last year’s conference, which rated 4.35 overall.

Ratings for several aspects of the conference facilities and local arrangements averaged 4.22, slightly lower than last year's 4.30.  Comments reflected a love/hate attitude toward the Asheville location.  Many reported major difficulties with the logistics and/or expense of traveling to the conference site, yet most said they loved Asheville and the conference's proximity to downtown shopping and dining.  This helps explain why, despite the complaints about travel and expense, respondents still rated the Asheville location an average of 4.35, higher than the previous two years' ratings of 4.15 (Phoenix) and 4.18 (Louisville).

The meeting rooms (4.16) and hotel rooms (4.43) received slightly lower ratings than last year.  Negative comments concerned dissatisfaction with the Sheraton, the location of the Internet Cafe (not in the headquarters hotel), and the lack of free wireless in all sleeping and meeting rooms.  Many commented that the meeting rooms had serious audibility problems, some due to the shape/layout of the rooms and others due to noise easily penetrating from adjoining areas.

The meals (4.07) and breaks (4.11) were also rated somewhat lower than last year.  Attendees were overwhelmingly pleased with the opportunities for breaks and small-group interaction.  About twice as many comments expressed a desire for light snacks at the breaks as appreciated the absence of temptations, and a number of respondents requested a sugar-free, caffeine-free option other than water.

Social events rated 4.18, almost equal to last year's 4.19.  Many attendees noted (as they did last year) that they would prefer organized sign-up sheets for dine-arounds, as in previous years.  Comments also indicated that people still want designated late-night social times, activities, and spaces, as in the past.

Online conference information rated as follows: the conference website 4.20 (4.24 last year), the forum 3.78 (3.58 last year), and the conference blog 3.77 (3.51 last year).  Comments suggested some confusion over the purpose of the forum and blog, including why information went out through different channels and how each was intended to be used before, during, and after the conference.

For the second year, NASIG used an online store (CafePress) for conference souvenirs rather than ordering, stocking, and selling products on-site.  Most respondents (79%) had not visited the store or had no opinion about it; 19% were happy with the selection and 2% were not.  Some indicated that they would have bought items had they seen them on-site, but would not remember to visit the website.

The majority of respondents were pleased with the pace and scheduling of the conference, though many comments suggested moving the time of the final vision session to an earlier slot to allow those who had to catch afternoon flights to attend.

Many attendees expressed their thanks to the Conference Planning Committee and Program Planning Committee for all their hard work.


This year the program followed a “no-repeat” format in which most sessions were not repeated.  Of those who commented on this aspect of the program, most preferred that sessions not repeat.  Respondents were asked whether the layout and explanation of program choices were easy to understand.  This area received a 4.16, increasing for the second year in a row, up from 3.98 last year (Phoenix) and 3.47 in 2007 (Louisville).

Respondents were also asked whether there was a balance in the types of programs offered.  This aspect rated 3.96, nearly the same as last year's 4.02, which was up from 3.95 the previous year.  Again, as last year, few people recommended the Electronic Resources & Libraries conference as a model for the program.  The largest complaint about program balance was the perceived lack of cataloging/metadata-related sessions during the regular conference: those who could not spare the time or expense of the preconference (or had taken the SCCTP workshop in another setting) had sparse selections available during the main program times.


This year the conference featured three vision sessions.  Peter Morville’s “Ambient Findability: Libraries, Serials, and the Internet of Things” received a 4.32 rating. “Measuring the Value of the Academic Library: Return on Investment and Other Value Measures” with Carol Tenopir received a 3.98 rating.  The final vision session, “What Color Is Your Paratext?” with Geoffrey Bilder rated a 4.51.  The average rating for vision sessions this year was 4.27, up from last year’s average of 4.07.

The thirteen strategy sessions this year generated ratings from 3.32 to 4.49, with an average rating of 4.04.  Nine of the programs rated 4.0 or higher, with the highest rating going to “Playing the Field: Pay-Per-View E-Journals and E-Books,” presented by Lindsey Schell, Katy Ginanni, and Benjamin Heet.

There were fourteen tactics sessions offered in Asheville.  Ratings ranged from 3.56 to 4.28 with an average of 3.67.  Seven sessions scored 4.0 or higher. The highest-rated tactics session was Dani Roach’s “Moving Mountains of Cost Data: Standards for ILS to ERMS to Vendors and Back Again.”

Nine poster sessions were presented this year.  Ratings ranged from 3.49 to 4.08, averaging 3.77.  Lisa Kurt’s “Making Usage Data Understandable with Visual Representation” received the highest rating of the group.

There were three preconferences offered this year with ratings from 4.50 to 5.00, with an average rating of 4.73.  The SCCTP “Electronic Serials Cataloging Workshop” received the highest overall rating of the group.



In Asheville, the user group sessions averaged a 3.80 rating and the informal discussion groups a 3.78, both up from last year, even though comments reflected major logistical difficulties, especially for the informal discussion groups: one group lacked a leader, other groups were too large for their tables, and the room was ill-suited to concurrent discussions, so many attendees could not hear.  The majority of respondents would like both types of sessions to continue, especially the informal discussion groups.  Two main observations emerged about the user groups: attendees often use more than one vendor's products, making the choice of which session to attend difficult, and vendor representatives often were not present, which defeated the perceived purpose of the session.  Two respondents commented that they missed last year's vendor speed dating session.

The First-Timers/Mentoring Reception rated a 4.20, up from 3.93 in 2008, with over 90% of respondents favoring the continuation of this event.  The brainstorming session received a 3.74, with 64% of respondents supporting its continuation.  Comments indicated that the session did not truly consist of “brainstorming,” but that it was much less contentious than in past years.  The business meeting rated a 3.63.  The “Meet the Board Members” session received a 3.29, down from 3.47 in 2008, with majority (60%) support for its continuation at future conferences.  Comments revealed that many attendees did not remember that this session took place, could not find it on the conference schedule, or could not determine its purpose.  Board members who commented thought it might work better with a less formal, structured approach; one respondent suggested name-tag flags with a special designation so board members would be easily identifiable.


Respondents by Organization Type


As in past years, academic library employees represented the largest group (72.6%) of respondents, including university (152), college (21), and community college (2) librarians.  Responses from the vendor and publisher community, including subscription vendors (8), publishers (8), database providers (1), and automated systems vendors (1), comprised 7.5% of respondents, down from 9.5% last year.  Attendees from specialized libraries, including medical (9), law (8), and special or corporate (7) libraries, made up 12.4% of respondents.  Other types of institutions included government, national, or state libraries (5.4%); public libraries (2.5%); students (0.8%); and those selecting “other” (0.8%).

Respondents were asked to describe their work, selecting more than one category as applicable.  The largest groups identified themselves as serials librarians (51.9%), electronic resources librarians (38.2%), acquisitions librarians (28.6%), and catalog/metadata librarians (24.5%).  Collection development librarians comprised 16.2% of respondents, technical services managers 14.1%, licensing/rights management positions 14.1%, and reference librarians 14.2%.  All other categories were selected by fewer than 10% of respondents.


When asked about their amount of serials-related experience, the majority of respondents (53%) were in the first decade of their careers, including those with less than a year (6), 1-3 years (37), 4-6 years (42), and 7-10 years (42).  Those with 11-20 years' experience comprised 23% of respondents, and those with more than 20 years comprised 24%.


Most were repeat NASIG attendees: 39% of respondents had attended 1-5 previous conferences, 17% had attended 6-10, 11% had attended 11-15, 5% had attended 16-20, and 2% had attended more than 20.  First-time attendees represented 26% of respondents.

The Evaluation & Assessment Committee would like to thank everyone who took the time to fill out the online evaluation forms.  We continue to be impressed each year with the thoughtful comments, which reflect a strong interest in continuing to improve the high-quality conference NASIG puts on each year.  Your comments and feedback are vital to the success of future NASIG conferences.

