Developing Voice in Digital Storytelling Through Creativity, Narrative and Multimodality

Reviewed By: Grace Song, Avery Campbell, Anna Johnson, Carla Axume, Millie Jones

Link to article: http://seminar.net/index.php/volume-6-issue-2-2010/154-developing-voice-in-digital-storytelling-through-creativity-narrative-and-multimodality

Synopsis / Summary (article’s core research question)
In this study, Monica Nilsson explores how digital storytelling has the potential to significantly alter the way children develop literacy and creativity, communicate experiences, explore new meaning and knowledge, and express themselves. Digital storytelling, in this context, is defined as "a multimodal narrative text comprising pictures, music, speech, sound and script." Nilsson's research revolves around a nine-year-old boy, Simon, who struggles with reading and writing. When given the opportunity to express himself through digital stories, Simon becomes deeply engaged, which Nilsson argues is because digital storytelling triggered his interest in literacy.
In her research, Nilsson explores this core research question: What impact does digital storytelling have on children's ability not only to master structural writing techniques, but also to "communicate experiences, explore new meaning and knowledge, and perform self-representation and self-expression," and by so doing develop "real voice" in their writing?

Research Methods Used
In her research, Nilsson analyzes Simon's digital stories using multimodality and visual analysis. Machin (2007) uses the term multimodality to express that "the way we communicate is [not done] by a single mode..." but rather by a combination of visuals, sound, and language (p. x). "Multimedial approaches systematically described the range of choices available and how they are used in context... [and] therefore describes the grammar of visual communication" (ibid., p. ix).
For structure in her analysis, Nilsson uses the three basic requirements of semiotic modes of communication: ideational ("states the affairs of the world" – e.g. yellow stands for sunlight or warmth), interpersonal ("represents and communicates the social and affective relationships towards what is being represented" – e.g. yellow stands for happiness), and textual ("about the coherent whole, genres, and how parts are linked together" – e.g. a color for headings to "show they are of the same order") (Nilsson, 2010, p. 5).

Findings and Conclusion (of the article)
Digital storytelling provides many opportunities to engage students in different multimodal forms of learning, as in the case of Simon, who effectively learned to read and write through digital storytelling. Although literacy is traditionally understood as learning to read and write, Nilsson describes literacy as "drawing conclusions, making associations, and connecting text to reality."
From this study, Nilsson found that Simon was learning “interpersonal meta-function,” referring to the interaction between producer and receiver, as well as “textual meta-function,” or the linking of parts and their composition. All of this was possible through digital storytelling. Additionally, Nilsson found that Simon’s digital stories were not “randomly assembled images, music, speech, captions and sound,” but rather “consciously, creatively, well reasoned and well crafted composition(s).” Digital storytelling also furthered Simon’s understanding of literacy as a social and cultural activity.
Nilsson concludes that although digital storytelling differs from traditional methods of learning to read and write, both share the core values of expression, meaning-making, and communication, and digital storytelling offers a significant way of learning that can help overcome learning challenges.

Questions
As our group considers digital storytelling and the ways it supplements education-related research, we have several questions relating to both the article and digital storytelling in general.
One question regards teaching theory. How can digital storytelling be assimilated into school environments without being forced upon students? Or should it be assimilated in such a way that students must complete a graded digital storytelling assignment? In today's education world, there are many views on different types of learners and on standardization. Should digital storytelling be encouraged for those who are interested and naturally more creative, or should it be used to bring out the creativity of those who may not at first be interested?
Another question for consideration is how digital storytelling can be used in libraries. As libraries constantly change and adapt to newer technologies and ideas, librarians will need to provide programs and opportunities for patrons that optimize learning and engagement with library resources. How can digital storytelling be a part of this? As many educational researchers now place great emphasis on early literacy, it is also pertinent to consider how digital storytelling technology can be part of early literacy efforts in libraries.

Final Thoughts / Conclusion
When considering using digital storytelling in schools, but not forcing it on every student, optional or extra credit assignments may be considered. This option may replace traditional assignments for students who have kinesthetic learning styles, or who are simply interested in exploring a new learning method. Conversely, mandatory digital storytelling assignments are intriguing because they could help students unlock untapped creative potential, unrealized through traditional learning methods. Adding a digital storytelling element to school curriculums would help students think and express themselves in new and creative ways.
Digital storytelling can also help children and youth, like Simon, who attend their local library. The library provides another place, as well as additional resources and materials, for children and youth to learn effectively and become literate. As Nilsson argues, libraries promote literacy in children and youth by providing them a place to find their voices and connect to the texts they are creating.

References
Machin, D. (2007). Introduction to multimodal analysis. London: Hodder Arnold. Retrieved from: https://books.google.com/books?hl=en&lr=&id=mwZfDAAAQBAJ&oi=fnd&pg=PP1&dq=%22Introduction+to+Multimodal+Analysis%22+Machin&ots=84X_VkGPpt&sig=WedPbytmGnWOQJfqxOiZPs8CQZE#v=onepage&q=%22Introduction%20to%20Multimodal%20Analysis%22%20Machin&f=false

Nilsson, M. (2010). Developing voice in digital storytelling through creativity, narrative and multimodality. International Journal of Media, Technology & Lifelong Learning, 6(2). Retrieved from: https://doaj.org/article/17d2a778143742a78fe9f9d517b92e4d

Students’ Perceived Challenges in an Online Collaborative Learning Environment: A Case of Higher Learning Institutions in Nairobi, Kenya

Reviewed By: Loren Reese, Kara Trella, Maryanne Doran, Melissa Horton, and Brittany Ely

Link to article: http://www.irrodl.org/index.php/irrodl/article/view/1768/3124

Article synopsis and core research questions

Muuro, Wagacha, Kihoro, & Oboko (2014) examined students' perceived challenges in using online collaborative tools, such as Moodle, Blackboard, and Web 2.0/social media applications, while pursuing higher education in Nairobi, Kenya. The article states that online learning has risen in popularity in Kenya as a result of the increased demand for higher education, and it goes on to address issues of faculty and infrastructure support for e-learning.

The authors conducted a questionnaire-based survey in two public universities and two private universities to identify the challenges student respondents perceived in their online collaborative learning environment. They state that three primary questions guided their research design: To what extent do students collaborate online while doing group work? Which components of online collaborative learning do learners perceive as challenging? And is there any significant relationship between university type (public or private) and the perceived challenges in using an online collaborative learning environment? (Muuro et al., 2014).
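The third research question asks about an association between two categorical variables: university type and perceived challenge. The review does not say which statistical test the authors used, but a chi-square test of independence is the standard tool for this kind of question. A minimal sketch in Python, with invented counts that are not taken from Muuro et al. (2014):

```python
# Hypothetical sketch: testing whether a perceived challenge is independent
# of university type. The counts below are invented for illustration.
from scipy.stats import chi2_contingency

# Rows: university type (public, private)
# Columns: students reporting / not reporting "lack of instructor feedback"
observed = [
    [48, 52],   # public: reported, did not report
    [35, 48],   # private: reported, did not report
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
if p < 0.05:
    print("Reject independence: the challenge appears related to university type.")
else:
    print("No significant relationship detected.")
```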

In their literature review, the authors focused on a philosophy of education termed constructivist theory, which holds that learning is an active process in which learners construct new ideas or concepts based upon their current and past knowledge. In seeking to define the term collaborative learning, the authors deferred to Dillenbourg's 1999 book Collaborative Learning: Cognitive and Computational Approaches, which states that collaborative learning is a "situation in which two or more people learn or attempt to learn something together." Dillenbourg adds that the situation is collaborative if the participant learners are at roughly the same level and "can perform the same actions, have a common goal and work together" (Dillenbourg, 1999).

Among the various issues, five were identified as major challenges: lack of feedback from instructors, lack of feedback from peers, lack of time to participate, slow internet connectivity, and low or no participation of other group members.

The goal of the research was to inform educators in Kenya as to how they can improve online collaborative learning and better support student success.

Methods used to answer the research question

The researchers relied on a descriptive survey, which allowed for the accurate summation of the varied and complex experiences of the study participants, and created a body of data substantial enough to enable meaningful statistical analysis. The questionnaire they developed was based on conceptual elements brought to light in the literature review and consisted of thirty questions. All but one of the survey items required Likert-scale or multiple-choice fixed responses; the single open-ended question asked respondents to describe their worst online group experiences.
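As an illustration of the kind of descriptive summary such fixed-response items yield, here is a minimal sketch, using invented responses rather than the authors' data, that tabulates a single Likert item:

```python
# Hypothetical sketch: summarizing one Likert-scale item from a
# fixed-response questionnaire. All responses are invented.
from collections import Counter

SCALE = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

responses = ["Agree", "Neutral", "Agree", "Strongly agree", "Disagree",
             "Agree", "Neutral", "Strongly disagree", "Agree", "Agree"]

counts = Counter(responses)
total = len(responses)
for level in SCALE:
    n = counts.get(level, 0)
    print(f"{level:<18} {n:>3}  ({100 * n / total:.0f}%)")
```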

The respondents were students at two public and two private Kenyan universities; with the assistance of instructors, a purposive sample of 210 students was identified, all of whom were taking at least one online course or module. Assistance from one or more experts was solicited to refine the wording and content of the questionnaire. After students were given two weeks to complete the online questionnaire, the response rate was 87% (Muuro et al., 2014). The authors noted that not all of the data analysis they performed was included in this paper; some of it may eventually appear as part of a larger future study (Muuro et al., 2014).

Findings and conclusions

The research found multiple issues the students faced with collaborative work at the Kenyan universities. The largest issues related to the ability to access online work and the lack of participation among group members.

Although Nairobi has some of the best internet infrastructure in Kenya, 30% of the participants reported unreliable internet service, which made accessing and participating in group work challenging (Muuro et al., 2014). Muuro et al. (2014) argue that if even places like Nairobi lack adequate infrastructure, connectivity needs to be improved both there and in other areas of Kenya so that more people have the opportunity to attend a higher education institution.

Other issues students reported were finding time to participate in group work and a lack of feedback from their instructors. Over 50% said that finding time was an issue, and 47% said they were not getting enough feedback from their professors (Muuro et al., 2014).

Unanswered questions and what future research might address

This study acknowledges that future studies should “adopt large scale empirical approaches” to encompass different universities and regions in Kenya (Muuro et al., 2014). Since the fiber optic network is well established in Nairobi, further studies are necessary to gauge internet connectivity in other regions and countries beyond Nairobi. The ability of users to join/utilize social networking sites to link with other students and faculty would be an interesting area of future research, as these types of sites could be crucial to online support and student success and retention.

Other possible future studies include investigating the effect of collaborative learning on critical thinking skills, as well as "improving the level of knowledge constructed in blended e-learning platforms" (Muuro et al., 2014). Challenges faced in using online collaborative tools, as well as the correlation between teaching ideologies and effective instruction, are areas that merit further examination. Ways to increase instructor involvement with collaborative work in order to support online students more effectively would also be interesting questions for subsequent projects.

This study reported a gap in workload distribution between online collaborative groups in public versus private universities: public university students reported fewer issues with workload sharing than private university students (Muuro et al., 2014). One theory proposed for this discrepancy is that public university students are more accustomed to working independently, with less instructor oversight, than private university students. Future research could reveal the root causes of this difference and generate interesting theories in this area.

Although the survey asked for demographic information, that information was not used to analyze the results. Using it might help explain why these issues arise for students. A more in-depth analysis could examine how gender, age, geographical region, educational background, and socioeconomic status affect the responses. There might be patterns among students with similar demographics that point to areas needing improvement in order to help students do collaborative work successfully.

References:

Dillenbourg, P. (1999). Collaborative learning: Cognitive and computational approaches. Advances in Learning and Instruction Series. Elsevier Science, Inc.

Muuro, M. E., Wagacha, W. P., Kihoro, J., & Oboko, R. (2014). Students’ perceived challenges in an online collaborative learning environment: A case of higher learning institutions in Nairobi, Kenya. The International Review of Research in Open and Distributed Learning, 15(6).

Knowing How Good Our Searches Are: An Approach Derived from Search Filter Development Methodology

Reviewed By: Shaquira Pinson & Chris Gaudette

Link to article: https://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/25382/19290

Article Summation

The article we selected presents a study done to determine the usefulness of an online tutorial, called SmartSearch, which is provided through a web portal to help librarians and professional researchers improve advanced search skills. The article, Knowing How Good Our Searches Are: An Approach Derived from Search Filter Development Methodology, appeared in the open access online journal Evidence Based Library and Information Practice in October 2015.

The study was done at Flinders University as part of a project called Flinders Filters, in association with a non-profit medical support network called CareSearch. Flinders Filters is a project with expertise in developing specialized search filters that are added on to large online medical databases like PubMed for a specialized clientele. This study tries to transfer evidence-based concepts used in search filter development to educational tutorials on refining and developing search skills for advanced researchers within the health and medical research fields (Hayman, 2015).

The research environment was exclusively online and anonymous: the research group interfaced with subjects through a set of online modules accessed through a website, an anonymous survey, and an optional user feedback function within the website. Given that the study took place in Australia, no mention was made of a specific institutional review board other than that "ethics approval was obtained from Flinders University to conduct the survey" (Hayman, 2015). The article does not provide a formal literature review but does informally dedicate a few sections at the outset to that purpose.

Key Research Questions

The Flinders Filters project and CareSearch clearly have expertise in developing search filters for specified clientele with advanced medical research needs. In the course of their work, they identified what they believe to be a lack of advanced research guides and tutorials available online. The study proposes this primary research question: Can concepts used in the development of search filters also serve as an evidence-based approach to developing the search skills of information professionals and others interested in advanced searching? To test this, the research group won a grant to develop a web module that is free for anyone to use. The module is intended to enhance search skills by introducing the four steps of a structured, evidence-based search strategy. The website also offers a survey that gathers contextual and personal data for the study (Hayman, 2015). Other important secondary questions include demographic trends of users, geographic trends of use, personal reactions, user experience responses, and patterns of use, to name a few.

Methodology

The study takes a mixed-methods approach: it uses quantitative methodology to collect raw statistical data and qualitative methodology to collect personal and contextual data from the survey. Each process would be considered an elicited data collection method. While it is unclear to what extent users were informed about the use of the data they contributed, no identifying data was used in either portion of the study. The epistemology of the study is interpretivist, given the constructed nature of the study and the attempt to understand human reactions to an effort at solving a perceived problem. The qualitative methodology is best described as action research, as the study tries to measure the effectiveness of a solution to a problem and to find further enhancements to that solution (Salmons, 2016).

Findings

The initial findings were that the modules were effective at teaching advanced research skills and that an evidence-based process in research is valuable. Furthermore, concepts used in the technical development of search filters should be used in the education of advanced search and information literacy skills. The findings were based on quantitative analytics of total website use, with over 6,000 unique users, augmented by the responses of the 50 users who participated in the follow-up survey.

We felt that the findings only indirectly and subjectively supported the conclusion. While the 6,000 users indicate interest in the tutorial, the figure says little qualitatively about those users' experience or what they were using the tutorial for. In terms of qualitative data, unfortunately only 50 users responded to the survey, representing less than 1 percent (roughly 0.8%) of the total unique users measured on the website. That seems too small a sample in a mixed-methods study to support any conclusive qualitative findings.

Secondary findings were that the site was clear and easy to use. Users originated primarily from the United Kingdom, Australia, and the United States. The value of the tool was seen in potential uses of staff training, continuing education, job evaluation, and bringing structure and process to search approaches. A common concern of imparting the evidence based system proposed was that the time constraints of the process did not justify the value of the results.

Further Questions

The study raises as many questions as it answers but proposes no further questions within the narrative of the article. Notable anomalies within the quantitative statistics were the amount of return use from the United Kingdom and Australia compared with all other countries; it would be interesting to explore why users in other areas of the world did not return after one-time use. A further need is for larger samples of qualitative data, and for exploring whether the low response rate of the survey and user feedback is evidence of dissatisfaction with the website and tutorial. Given that users' primary concern with implementing the techniques suggested in the tutorial was time, we were curious whether that concern was rooted in a lack of overall time to complete job functions or a sense that these processes were not worth the required investment of time. As it is, the study provides an interesting preliminary investigation into effective online tools for developing search expertise and into the value of structured, evidence-based approaches in the specialized area of medical research.

References

Hayman, S. (2015). Knowing how good our searches are: An approach derived from search filter development methodology. Evidence Based Library & Information Practice, 10(4), 7-23.

Salmons, J. (2016). Doing qualitative research online. London: Sage Publications.

Assessment of E-learning Needs Among Students of Colleges of Education

Reviewed By: Jon Andersen, Alisa Brandt, Megan Ginther, Desiree Gordon

Link to article:  http://tojde.anadolu.edu.tr/yonetim/icerik/makaleler/936-published.pdf

Article Synopsis:
In this article, Azimi (2013) begins by exploring the literature to define e-learning and to identify it as a growing field in higher education across many disciplines. He also surveys needs assessment articles on which to model his study, giving him a working understanding of different methods for providing e-learning and different approaches to needs assessments. Azimi (2013) then develops a needs assessment survey to identify the needs of various groups of students with regard to e-learning opportunities. This needs assessment was carried out with students at the University of Mysore. The underlying idea was that if students were introduced to and offered e-learning opportunities, they would, as future educators, be more comfortable using e-learning in their own classrooms. Azimi (2013) wants to identify areas where students need more support to become comfortable with e-learning both as students and as future educators. He theorizes that differences in gender, government aid status, and subject area of specialization may have an impact on student needs.

Methods Used:
Azimi's (2013) major method was a needs assessment survey designed to identify how e-learning methods could meet the needs of students in a Bachelor of Education program. Azimi (2013) examined the results to identify differences in needs between male and female students, between government-aided and unaided students, and among students in the Sciences, Arts, and Languages. The paper-based survey was distributed randomly to students in the Bachelor of Education program and consisted of two parts: a demographic section collecting information about gender, work experience, financial aid standing, and subject area of specialization; and a second part designed to identify the students' understanding of e-learning system components such as instructional design, multimedia components, internet tools, computers and storage devices, and connections and service providers. He targeted 374 students with his survey.

Findings and Conclusions:
Azimi (2013) found that most students were comfortable navigating internet tools and using video streaming and text tools. The areas where students expressed the least comfort were instructional design, mobile technology, and asynchronous learning. There was no noticeable difference between male and female students in these needs, or between government-aided and unaided students. There was also no significant difference among students specializing in Science, Arts, or Languages. He did identify a significant correlation between gender and government aid: more female students were in government-aided programs than male students. In addition, Azimi (2013) concludes that students need more support in understanding the benefits of e-learning. He writes, "Moreover, students (as future teachers) should be made aware of the potential of various e-learning technologies for enhancing the teaching and learning process. Clarification of the incentives and elimination of obstacles to fully integrate e-learning is needed" (Azimi, 2013, p. 282).

Unanswered Questions and Future Research:
This study does identify some areas where the colleges can strengthen their instruction regarding e-learning, to help students become more comfortable with it and with offering it in the future. However, Azimi's (2013) major theory was not supported by the study: he could not show that gender, financial aid status, or subject area of study had any impact on students' confidence with e-learning. We believe his theory was also not well supported by the research he cited in his literature review; many of the articles he cited spoke more about e-learning confidence being built through explicit teaching about e-learning for educators. One future direction would be to explore whether students who have taken courses through e-learning are more comfortable than students with no e-learning background. Another factor to consider is whether explicit courses teaching students about e-learning could affect student confidence with e-learning methodology. We suspect that students who are taught about e-learning methods and have taken courses through e-learning would be most comfortable offering e-learning courses in the future, while students with no e-learning background would be the least comfortable.

While student background had no impact on students' comfort with and knowledge of e-learning system components, Azimi (2013) does identify what seems to be a link between gender and the need for government aid, and another between subject area of concentration and government aid. Both of these issues could be explored in more depth, although we believe there have been other studies dealing with them, especially gender issues. It seems intuitive that women would require more government aid to attend school than men, especially in a developing country where gender equality is still far from achieved; factors such as less family support and planning for women to attend school would contribute to the need for more government aid. One question we had was whether there were more government aid programs available to encourage women to attend school, and whether that would play a factor in the results Azimi (2013) reported.

The link between the need for government aid and subject area of study could make for an interesting follow-up study to determine what causes students in the Arts to require more financial aid than students in the Sciences. We theorize that this may be due to more women going into Arts fields than the Sciences, so this result may also be tied to the gender issue. However, it could also indicate that students from lower socioeconomic strata are more likely to enroll in Arts fields than in the Sciences, and we wonder if this could be due to gaps in early science education caused by lack of access to science education tools. There are many questions in this area that could be explored.

References:

Azimi, H. M. (2013). Assessment of e-learning needs among students of colleges of education. Turkish Online Journal of Distance Education, 14(4). Retrieved from https://doaj.org/article/3e8c2b89e99d4157900bcf8c54032b26

A Randomised Controlled Trial Comparing the Effect of E-learning, with a Taught Workshop, on the Knowledge and Search Skills of Health Professionals

Reviewed By: Heather Campbell, Marisa Eytalis, Gloria Nguyen, Bracha Schefres, Stacy Sorrells

Link to article: http://ejournals.library.ualberta.ca/index.php/EBLip/article/view/54/155

Article synopsis and core research question

Nichola Pearce-Smith (2006) conducted a study to answer her main research question: Is there a significant difference between self-directed learning using web-based resources and learning in a traditional classroom-based workshop for healthcare professionals trying to improve their database searching skills? Her objective was to test two null hypotheses: 1) that there is no difference between the two interventions being compared, and 2) that there is no difference in knowledge and skills before and after either educational intervention.
Although searching for evidence is an essential job skill for healthcare professionals, and training is thought to improve their skills, Pearce-Smith's literature review could find only limited evidence for this, and most existing studies were "small and methodologically poor" (p. 45). This study was published in 2006, so there was not yet much evidence on the effects of e-learning and whether it improves knowledge in health care professionals. She did find one qualitative study that found no differences between two groups who participated in workshops versus e-learning. Since the literature on this subject was sparse, the author designed and conducted her research study between September 2004 and September 2005.

Methods used to answer the research question

The method used in this study was a controlled trial of 17 health care professionals randomized into two groups. The study population was a convenience sample drawn from the Oxfordshire Radcliffe Hospitals NHS Trust (ORHT) and recruited via an invitation letter sent by mail or email through the Trust's intranet and email list, or through posters and leaflets displayed throughout the hospital. The researcher acknowledged that volunteers might have been more inclined to join the trial because they wished to learn or improve their searching skills. To be included in the study, participants had to work within ORHT and have access to the Internet. To test both null hypotheses, participants were randomized into two groups using computer-generated random numbers. Both groups completed a search exercise before the training to establish a baseline of their ability and another afterward to measure whether any improvement was achieved. The first group (WG) attended a two-hour search skills workshop conducted by a librarian, while the second group (EG) were shown how to access the online learning module, which covered question formulation, study design, free text, thesaurus and Boolean searching, and examples of searching PubMed and the Cochrane Library.
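The allocation step is described only as "computer-generated random numbers." A minimal sketch of one common way to perform such a randomization (the paper's exact procedure is not given in the review, and the participant names below are placeholders):

```python
# Hypothetical sketch: randomly allocating 17 recruited participants into a
# workshop group (WG) and an e-learning group (EG).
import random

participants = [f"participant_{i:02d}" for i in range(1, 18)]  # n = 17

rng = random.Random(2004)       # fixed seed so the allocation is reproducible
shuffled = participants[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2       # 17 is odd, so one group gets the extra person
workshop_group = shuffled[:half + 1]   # WG: 9 participants
elearning_group = shuffled[half + 1:]  # EG: 8 participants

print("WG:", workshop_group)
print("EG:", elearning_group)
```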

Findings and conclusions

Findings:
The study found that while there was a slight increase in the knowledge and search skills of each group, the increase was not significant. While the workshop group did perform better at devising a search strategy, there was no other notable difference in improvement between the two groups, which addresses the first null hypothesis. The second null hypothesis, that there would be no difference in knowledge and search skills before and after either the online or workshop training, was accepted.

Conclusions:
While the second null hypothesis was accepted, the results were largely inconclusive due to the small sample size; several factors contributed to the low participation. The initial research question remains an important one, and the author hopes that others will build on this research and methodology to explore the topic further.

Unanswered questions you have and what future research might address

More than once the author pointed out that the results of this study are inconclusive due to the small sample size. Contributing factors were that no compensation was provided to participants and that clinical staff are generally hard to recruit; in addition, a second search skills test administered at a later date might reveal true skill retention. The author offers suggestions for obtaining significant results, including seeking participants early, providing compensation, using "different contact methods," and making the "inclusion criteria as wide as possible."
Collaboration with library staff would be necessary to conduct this type of study; once that collaboration and the e-learning course are established, a longer study period might be possible. The librarian(s) could collect participant data, provide consent forms, and administer pre-intervention tests to willing participants, and the researcher could obtain the resulting data at a later date, once the workshop (WG) group reaches a larger number. The researcher could also make the e-learning materials available on demand and postpone the second round of testing (in a clinical setting) until a larger number of study participants has been reached. Stratified randomization could then be used to analyze the larger participant group. Another question is how accessibility (e.g., language barriers) was addressed or accommodated in this study.

A thoughtful attempt to answer your own questions

Is there a significant difference between self-directed learning using web-based resources and learning in a traditional classroom-based workshop for healthcare professionals trying to improve their database searching skills? The author's objective was to test two null hypotheses, and ultimately no significant differences were found between the knowledge and search skills of either group after the online or workshop training.
The other question we raised was how accessibility was addressed, given the limited number of participants. With such a small sample group, accessibility was easier to manage. The author designed the online learning resource, which was made available on the web and password protected so that only EG participants could access it; the content included question formulation, study design, free text, thesaurus, and Boolean searching. An experienced librarian, using methods such as presentations, live Internet demonstrations, and interactive group work, taught the WG.
Lastly, how could the sample group be increased from a small group to a larger one? Health professionals could have been encouraged to participate through incentives such as book tokens, wine, free passes to festivals, or gift cards. More funding could also have helped: as with any research, adequate funds are needed to complete a successful study, and web-based resources are costly, with not everyone having easy access to such materials.

References:

Pearce-Smith, N. (2006). A randomised controlled trial comparing the effect of e-learning, with a taught workshop, on the knowledge and search skills of health professionals. Evidence Based Library and Information Practice, 1(3), 44-56.

Hashtag Functions in the Protests Across Brazil

Reviewed By: Melissa Balok, Edward Pantoja, Marie Ingram, Chloe Noland, Emma Weinberg

Link to article: http://sgo.sagepub.com/content/5/2/2158244015586000

Introduction
In the world of Web 2.0, tagging behavior is emerging as an important area of qualitative research for studies interested in learning more about language, community, cultural identity, and much more. In this article, Recuero et al. (2015) perform a qualitative study of hashtag use and tagging behavior on Twitter in a specific political and regional context: the June 2013 political protests in Brazil. By comparing the localized tweets to "a theoretical background of the use of Twitter and hashtags in protests and the functions of language" (Recuero et al., 2015), the study was able to identify not only specific context-based trends in tagging behavior but also larger trends in virtual communication, personal incentive, and emotional versus recruiting behaviors.

The article begins by describing the political climate in Brazil at the time the hashtag sample was collected. The authors explain how tweets were used both as a mobilization tool by protesters and as a way for the community to keep abreast of real-time occurrences at the rallies. This personalization of politics is further exemplified through a discourse on the effects of social media on social movements, personal lives, and the documentation and spreading of information during critical times. Drawing on the functions of language, which can be broken into six main categories, the authors applied these linguistic classifications to the conversational and organizational qualities of tweeting. Core research questions included: What are the types and communicative functions of hashtags used during protests? How do co-occurrences of hashtags convey different meanings and functions? And what are the trends in hashtagging behavior as events unfold over time?

Method
Dealing with an overwhelming volume of content, Recuero et al. (2015) methodically analyzed a large dataset: 2,321,249 tweets posted between June 13 and 20, 2013, the dates with the most Twitter activity during the protests. To create and organize the dataset, 35 keywords were tracked and entered into the open source software yourTwapperkeeper, which archived tweets containing those keywords. Researchers then attempted to classify the meaning of hashtags and their co-occurrences. Answers to these questions helped to create a context around the function of hashtags and how different co-occurrences could convey different meanings.
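To make the archiving step concrete, here is a minimal sketch of the same keyword-filtering idea; the keywords and tweets below are invented, and the study itself used yourTwapperkeeper rather than custom code for this step:

```python
# Hypothetical sketch: keeping only tweets that contain at least one tracked
# keyword, then pulling out their hashtags. All data here is invented.
import re

TRACKED_KEYWORDS = {"#protestorn", "#vemprarua", "#ogiganteacordou"}  # placeholders

tweets = [
    "Todos na avenida agora! #vemprarua #protestorn",
    "Bom dia, sem novidades por aqui.",
    "#OGiganteAcordou e nao vai dormir #vemprarua",
]

def hashtags(text):
    """Return the lowercased hashtags found in a tweet."""
    return [tag.lower() for tag in re.findall(r"#\w+", text)]

# Archive a tweet if any of its hashtags matches a tracked keyword.
archived = [t for t in tweets if TRACKED_KEYWORDS & set(hashtags(t))]

for t in archived:
    print(hashtags(t), "<-", t)
```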

In order to analyze the large dataset objectively, Recuero et al. (2015) used a coding procedure to categorize hashtags. Jakobson's (1960) model of six main language functions was used to categorize hashtags according to their linguistic and communicative purposes. From this basic foundation, hashtags found together within the same tweet were also classified. Because functions can overlap within a single tweet, a hierarchy was needed to identify the dominant function of each tweet; the criterion used was to ask, "What is the purpose of this message?" Co-occurrences were categorized using the same criterion. Lastly, the 500 most retweeted tweets were analyzed using similar mechanisms to give context to the quantitative analysis.
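Here is a minimal sketch of the coding-plus-co-occurrence step, assuming a hand-built codebook that maps hashtags to function labels. The labels and hashtags are illustrative, not the authors' actual codebook, and the study coded tweets manually rather than by automatic lookup:

```python
# Hypothetical sketch: coding each hashtag with a language-function label and
# counting which function pairs co-occur in the same tweet.
from collections import Counter
from itertools import combinations

CODEBOOK = {                      # hashtag -> function label (invented)
    "#vemprarua": "conative",
    "#protestorn": "referential",
    "#ogiganteacordou": "expressive",
}

coded_tweets = [
    ["#vemprarua", "#protestorn"],
    ["#vemprarua", "#ogiganteacordou"],
    ["#vemprarua", "#protestorn", "#ogiganteacordou"],
]

pair_counts = Counter()
for tags in coded_tweets:
    functions = sorted(CODEBOOK.get(t, "unclassified") for t in tags)
    # every unordered pair of functions within one tweet is a co-occurrence
    for pair in combinations(functions, 2):
        pair_counts[pair] += 1

for (f1, f2), n in pair_counts.most_common():
    print(f"{f1}-{f2}: {n}")
```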

Findings/Conclusion
The hashtags within the dataset were thus painstakingly paired to each of Jakobson's (1960) six language functions. Contextual hashtags that frequently related to geographical location, where the event was happening, were classified as "referential." Hashtags that indicated user emotion, thought, and opinion, including protesters' demands, were labeled "expressive/emotive." "Conative" hashtags were those that urged action and served to motivate other protesters, and "metalingual" hashtags were those that referred to the content of the tweet itself.

With regard to co-occurrences, the authors found that the most prominent pairing was conative-conative, encouraging action and strengthening the message through emphasis. Conative-referential hashtags were also prominent, combining the call to mobilize with a physical location, while referential-referential hashtags helped spread contextual information. Other co-occurrences functioned to mobilize through opinions and demands, to contextualize the tweet as a whole, or to "sign" the tweet. The most retweeted tweets were also analyzed; most retweets focused on the live events of the protests as they unfolded.

Overall, the results demonstrated tagging behavior during the protests in Brazil to have several functions: to call others to action, to align and coordinate protesters, to share information including metadata regarding content, and to express and support opinions.

Questions and Further Research
It would be interesting to see studies conducted with the same classification of tweets and hashtags that this study created, but examining different protests in different countries. Comparing the results with the protests in Brazil could yield a better qualitative understanding of how different countries use hashtags, in addition to furthering the examination of tagging behavior across countries. Additional questions come to mind: Do Internet users use hashtags in the same way regardless of language and country of origin? This could lead to bigger studies of human Internet behavior in general and how humans adapt to technology. Alternatively, is there evidence of similar tagging behavior in applications that allow more than 140 characters per post? Recuero et al. (2015) briefly mention that Twitter's character limit could cause users to eliminate tags that are less important. This leads one to ask: what information about the protests is missing from Twitter, and could it be found on alternative ICT platforms such as Facebook?

References:

Jakobson, R. (1960). Linguistics and poetics. In T. Sebeok (Ed.), Style in language (pp. 350-377). Cambridge, MA: MIT Press.

Recuero, R., Zago, G., Bastos, M. T., & Araujo, R. (2015). Hashtag functions in the protests across Brazil. SAGE Open, 5(2). doi:10.1177/2158244015586000

#DitchTheSurvey: Expanding Methodological Diversity in LIS Research, by Halpern, Eaker, Jackson & Bouquin

Reviewed By: Megan Lohnash, Frances Owens, Emmanuel Edward Te, and Janice Christen-Whitney

Link to article: http://www.inthelibrarywiththeleadpipe.org/2015/ditchthesurvey-expanding-methodological-diversity-in-lis-research/

Article synopsis and core research question.

Halpern, Eaker, Jackson, and Bouquin (2015) identify surveys as the most commonly used research method in Library and Information Science (LIS) research. They report that 21% to 49% of LIS research articles utilize surveys as their primary data gathering method (Halpern, Eaker, Jackson, & Bouquin, 2015). This “over-reliance on the survey method limit[s] the types of questions we are asking, and thus, the answers we can obtain” (Halpern et al., 2015). The article attempts to discover why the survey as a research method is heavily favored and uncover possible alternative methods.

Although surveys are an affordable, quick, and easy-to-implement data gathering method, many LIS professionals do not have the training required to conduct an effective survey. As a result, LIS professionals may not deliver survey questions well, offer suitable response choices, or use surveys in an appropriate manner. Halpern and colleagues propose that librarians generally lack the familiarity with research design and implementation needed to obtain the data their research questions require (see also Kennedy & Brancolini, 2012).

The authors propose that LIS professionals consider using evidence-based library and information practice (EBLIP). Eldredge, one of the pioneers of EBLIP, argues that EBLIP "employs the best available evidence based upon library science research to arrive at sound decisions about solving practical problems in librarianship" (as cited in Halpern et al., 2015). The authors conclude by discussing strategies for choosing the best research method for a given research question, describing various research methods available to LIS professionals, and calling for LIS professionals to "think outside the checkbox" (Halpern et al., 2015).

Methods used to answer the research question.

The authors present a literature review to provide context and to suggest alternative research methodologies. They found that earlier studies investigating the research methods used in LIS journal articles shared the same definition of research, but used different criteria to select journals for study, used different definitions of what constitutes a research article, and were influenced by the LIS journals available at each researcher's organization (Halpern et al., 2015).

Findings and conclusions.

The primary finding of this article is that surveys are over-represented as a research method in LIS literature. While the authors acknowledge that surveys are low cost, easy to implement, and appropriate for some research questions, they assert that surveys are frequently used by librarians as a one-size-fits-all data gathering method, which inevitably hampers a researcher’s ability to probe participants for the meaning behind their answers. One example cited in the article draws from a Brown University Library patron satisfaction study in which the researchers used both focus groups and surveys to answer their research question. In the focus groups, “‘clear patterns of deep concern began to emerge’ [obtained via probing] that were not apparent in survey responses, and indeed, that surveys are not capable of obtaining” (Halpern et al., 2015).
The authors conclude with a call for other LIS researchers to enrich their “methodological toolbox,” and “to seek out questions that can be best answered by less frequently employed practice” (Halpern et al., 2015). How research is conducted and analyzed depends on the way the question is being asked, which ultimately influences the kinds of results that can be uncovered. The authors also encourage people to join them in an ongoing discussion on Twitter by using the associated “#DitchTheSurvey” hashtag.

Questions for future research

While it has been interesting to learn that surveys are over-represented in LIS literature, this finding raises the question of why this one method is used so heavily by LIS professionals in their published research. The authors' explanations, that LIS professionals may not be as familiar with other methods and that surveys are cheap, easy, and effective for reaching a large audience, are certainly plausible. However, we would be interested in determining whether there is a relationship between LIS professionals' educational history and their use of surveys. How robust are the research methods course offerings in LIS schools? Is the survey research trend due to a generational gap that will correct itself over time and with experience? This could be studied with a longitudinal study that follows a graduating cohort of LIS professionals over 5-20 years and monitors the types of research they conduct and publish as time progresses.
It may also be useful to analyze the data gathered by Halpern et al. (2015), as well as the complementary data referenced, in order to study the frequencies of methods used in LIS. This would help resolve some of the issues identified by the authors regarding a lack of consistency in analysis caused by selection bias and inconsistent definitions across literature.

References:

Halpern, R., Eaker, C., Jackson, J., & Bouquin, D. (2015). #DitchTheSurvey: Expanding methodological diversity in LIS research. In the Library with the Lead Pipe. Retrieved from http://www.inthelibrarywiththeleadpipe.org/2015/ditchthesurvey-expanding-methodological-diversity-in-lis-research/

Kennedy, M., & Brancolini, K. (2012). Academic librarian research: A survey of attitudes, involvement, and perceived capabilities. College & Research Libraries, 73(5), 431-448. doi:10.5860/crl-276