Ethics – Reflections Post #8

Ethics.  Ethics in qualitative research.  What can one say?  Since everyone’s ideas as to what is and is not ethical can differ – and perhaps more than a little bit – when considering “ethics” in the context of a particular profession or area (i.e. qualitative research), I strongly believe there should be some uniform guidance (dare I say, “directive”) and specified “norms” as to “appropriate” conduct for those within that particular profession or area.  We probably all agree on that.

The problem, however, is what happens when someone in that area or profession behaves in a way inconsistent with those guidelines.  In this regard, I think one of the most interesting things that came out of the discussion in class on this subject involved our various “takes” on how we would handle a particular ethical dilemma.  It was pointed out that there are layers of impact and considerations involved – that is, our determination as to how we would handle the dilemma may be fine at one level of impact (individually, for example, it might be fine for me to blow the proverbial whistle on an ethical breach on the part of a colleague) but cause significant problems – and change the analysis involved in arriving at a decision – at another level of impact (i.e. blowing the whistle on the colleague may negatively impact my entire institution or even my entire profession).  I’m suggesting here that even if there are established “norms” of ethical conduct or behavior, the analysis of how to handle a breach of those norms is far from simple – and simply having “established” ethical norms may not be sufficient to handle these situations.  In this thinking, I am actually reflecting on my experience in law school (I know, I know… lawyers, ethics…).  We were required to complete a course in “ethics” – which, in point of fact, dealt primarily with personal conduct as opposed to professional dealings.  But we were also required to take a course in Professional Responsibility – which is different.  PR dealt with a licensed attorney’s obligations upon discovering breaches in professional conduct – including, and particularly, ethical breaches (e.g. when you must decline to represent a client for conflict of interest, or mishandling of client funds) – and when you are actually required (as one of the profs so eloquently put it) to “rat out” a colleague.
In essence, the MRPC (Model Rules of Professional Conduct) provide the type of guidance and structure I was alluding to above (and if there isn’t a rule on point for your specific situation, there is a governing body to whom you can apply for direction).  They not only impose sanctions on the putative bad actor, but also provide sanctions for those who don’t act in accordance with the Rules’ requirements upon discovering a bad actor.  In short – the internal debate we might have had concerning the unethical researcher would be considerably shortened were it instead to involve an attorney not behaving in accordance with the MRPC.  Indeed, every licensed attorney must pass the MPRE (Multistate Professional Responsibility Examination) BEFORE they can even sit for the Bar (licensing) Exam itself.  Attorneys found not to have acted in accordance with the MRPC can be disbarred (i.e. their license to practice law is revoked).  Did I mention this is a bad thing??  As a matter of fact, there was a certain US President about 40 years ago who was involved in some questionable conduct around his reelection – and while he was never in fact charged with the criminal conduct of which he was accused (nonetheless receiving a “preemptive” pardon from his successor to preclude possible indictment), he was a licensed attorney in NY, and the NY Bar investigated his actions and ultimately disbarred him for violating these professional “ethics” or “conduct” canons.  That was effectively the only punishment he received (other than, of course, a certain amount of ongoing public censure…).  (Lesson there: don’t mess with the State Bar!!)

Mind, I do not necessarily subscribe to any concept of “deterrence”: I do not for a moment believe that having these rules in place will stop unethical behavior – on the part of attorneys, or of researchers for that matter.  Still, I do believe that having “codified” professional ethics or responsibility rules provides the members of the particular profession: 1) a common framework within which to analyze their own behavior and that of their colleagues in their professional dealings, informed by the needs/requirements/special considerations of their particular profession; 2) guidance as to their specific obligations in the event they discover a breach by a colleague – essentially taking the decision from their hands for the most part, and thereby avoiding the type of quandary we discussed in class (again – for the most part); and (ideally) 3) a body monitoring compliance, to which concerns can be raised for uniform review and advice.  In addition, I feel that even if such a schema doesn’t in fact deter “bad actors” (or, rather, eliminate unethical behavior), it provides at least a sense that they will themselves suffer negative consequences for their bad (read, “unethical”) acts, and a recognition that the “harm” done is not only to the individual(s) involved but to the broader group (whether to fellow attorneys and their firms, or researchers and their institutions).

Off my soap-box now. ;>)  As regards IS and research, I am sure that, for example, IRBs are generally intended to provide this function – to a degree.  However, my sense at this point in the proceedings (and I could be wrong…) is that there is not a great deal of uniformity between and among IRBs, and that different “rules” apply depending on the specific institutions or foundations through which your research is being conducted.  While I understand and appreciate the sentiment behind the expression “a foolish consistency is the hobgoblin of little minds,” I do believe that consistency is hugely important when attempting to regulate behaviors and conduct within groups (particularly in the application and implementation/enforcement of the regulations/laws/rules).  Hence, to the extent my sense is correct concerning the lack of consistency across research institutions etc. in terms of ethics or professional conduct/responsibility requirements – I would like to see that change in the future.

Content and Discourse Analysis – Reflection Post #7

Once again, I am at a bit of a disadvantage in that I missed class and hence the related exercise.  Nonetheless, I will attempt to offer a few thoughts concerning content and discourse analysis and how they might be applicable in my continued adventures in IS…

Based on the work I have been doing this semester in the iSensor Lab concerning linguistic analysis and language-action cues, I firmly believe that the “information” and cues available in a F2F setting (such as, for example, an in-person interview) are in large part lacking when we have only the raw “text” itself.  Just as an example, the appearance, body language and even tone of voice of the speaker are lost (at least for the most part – the cool thing about linguistic analysis is that it attempts to supply some of these cues from within the text itself by way of examining word choice, phraseology, length of communication, wordiness (ahem…) and syntax… but I digress).  That is to say, the information gained in F2F interactions includes both verbal and non-verbal components, and it is difficult at best to “replace” the information – especially the non-verbal information – that is missing when only the “written” evidence of the interaction is available (even if originally in a F2F environment).  So, how much can we really extract from a writing?  It seems it would depend upon what we are attempting to look at in reviewing the writing.  The example of being able to observe court testimony “live”, or even watch a video recording of it, versus reading the transcript comes to mind.  In reading the transcript, we can read the (supposedly) exact words of the parties – but, unless the judge or counsel asks for a specific notation to be made in the transcript concerning the facial expressions or gestures of the witness being examined, these are lost when only the text remains.  Inflection and tone are also lost.  For us Rumpole of the Bailey fans, this is specifically dealt with in “Rumpole and the Show Folk,” where Rumpole gives several different “readings” of the same “line” from his client’s (an actress’s) statement concerning the shooting of her husband.  As you can probably imagine, her words themselves – “I shot him.  What could I do with him.  Help me.” – were quite incriminating on the page (you will note, there isn’t even any punctuation to supply tone or emphasis).  However, with his customary style, Rumpole deftly illustrates that those words could be imbued with considerably different meaning(s), and that considerably different information (or data?) is imparted, depending on tone and emphasis.  The same applies to the “Delicious Death” quotation in this week’s lecture notes – when Miss Murgatroyd tries to tell Miss Hinchcliffe what she saw the night of the shooting at Little Paddocks, meaning (or at least interpretation) follows inflection/emphasis: She wasn’t there, she wasn’t there, she wasn’t there.  All this to say, if the objective is just to verify “what” was said (or written…) – without interpretation or looking for meaning – then content analysis is great.  It’s a fairly straightforward, unobtrusive means of collecting data and deriving inferences from writings through the process of coding various aspects of the text.  Taking a holistic view of the writing (or body of writings) being explored can even provide a certain degree of context as well.  If, on the other hand, the intent is to interpret or derive “meaning” in an objective sense, it’s a bit problematic.  Bottom line, I think, is that it is important to keep in mind the kinds of “information” that get lost when looking only at text or writing.  Provided these are not what you are looking for, then CA is fine.
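To make that coding process a bit more concrete, here is a minimal sketch in Python of the mechanical part of content analysis – tallying text against a codebook.  The codebook, its codes, and the indicator terms are entirely hypothetical (my own invention, not drawn from any of the readings), and a real analysis would of course involve human judgment and iterative refinement rather than simple term matching:

```python
import re
from collections import Counter

# Hypothetical codebook: each code maps to a few indicator terms.
# In actual content analysis, codes would derive from the research
# questions and be refined over multiple passes through the text.
CODEBOOK = {
    "uncertainty": {"perhaps", "maybe", "unsure", "possibly"},
    "affect": {"afraid", "happy", "angry", "worried"},
}

def code_text(text):
    """Tally codebook matches in a text (a crude first pass only;
    a human coder would normally review and revise these counts)."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for code, terms in CODEBOOK.items():
        counts[code] = sum(1 for w in words if w in terms)
    return counts

counts = code_text("Perhaps I was worried; maybe I was simply unsure.")
print(counts["uncertainty"])  # 3
print(counts["affect"])       # 1
```

Note that this captures only “what” was said – exactly the limitation discussed above: the same sentence spoken with different inflections would produce identical counts.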

Discourse analysis is a bit more challenging – but it does seem to get at some of the linguistic analysis I mention above, which is of interest to me.  As I understand it, it seeks to take a very holistic view of the communications (i.e. writings) in question – and considers both the internal context of the writings (i.e. the context or sense in which certain words or phrases were used) and the external context of the writing (i.e. the historical/socio-political factors shaping who was writing, what they wrote, and why).  Not sure this is a good example, but I have in mind how the socio-political circumstances in 17th century England (i.e. the English Civil War) informed the writings of Hobbes and Locke – and influenced not only what they wrote about but how they approached/treated it.  Just as meaning and information can be lost in the absence of the facial expressions, body language, etc. available in F2F interactions, when we have only the written account to go on, I firmly believe that a good deal of meaning (and information) can also be lost if we are not mindful of the broader “context” of the communication in this sense.  I guess in this sense, I see CA and DA as potentially being complementary to each other – particularly depending upon the topic of investigation – with CA looking at the “micro” level of the writing and DA looking at it from a “macro” level.

Coding in Grounded Theory – Reflection Post #6

Although I missed class, and hence the in-class activity, I will nonetheless offer up a few general thoughts concerning this topic.  I very much appreciated the way in which Strauss & Corbin (1994) described it: a general methodological approach that “explicitly involves ‘generating theory and doing social research [as] two parts of the same process…,’” and its emphasis on what I believe would fairly be called “organic” theory development, wherein the data drive the development and conceptualization of the theory (i.e. a more inductive rather than deductive approach).  As someone who has been firmly entrenched in the Socratic method of argument and reasoning and trained to apply deductive approaches, the inductive approach of Grounded Theory is clearly a completely different way of conceptualizing, analyzing and discoursing on questions and topics of interest from what I am used to.  I can also understand why Charmaz (2006a) contends (in “Invitation to Grounded Theory”) that the grounded theory approach and process will “…bring surprises, spark ideas and hone your analytical skills” (p. 2), and “foster seeing your data in fresh ways and exploring your ideas about the data…”, because data are collected from the beginning of the project with an eye towards theoretical analysis and development.  The researcher is thereby constantly forced to evaluate new data in light of existing data, and the synthesis (or, as Charmaz puts it, ‘sense making’) of these data drives new constructs, which in turn raise additional questions of interest concerning the phenomenon being studied and lead again to the collection of new data to be incorporated into the developing construct.
For myself, while I will probably continue to reflexively revert to my deductive roots, I will nonetheless be mindful not only that deductive approaches are not the only approaches but that, particularly in terms of brainstorming and “thinking outside the box” about a problem, there is much to be said for taking an inductive approach – and particularly the approach outlined in Grounded Theory.

With all that said about grounded theory as a general research/theory development approach, I will now offer a few thoughts specifically concerning coding as a data analysis tool vis-à-vis grounded theory.  Given the inherently iterative nature of grounded theory, I can certainly see where coding of data could be a particular challenge!  Since the data are, essentially, a moving target (constantly being updated and reinterpreted) – even at the time of “initial coding” – assigning a code to each datum and putting them into nice neat buckets or categories based on these codes (i.e. “focused coding”) would presumably be a constant exercise – whether because reevaluation of the data collectively leads to the conclusion that the original code no longer “fits”, or that a datum doesn’t belong where it was initially “placed”, or because the original buckets themselves no longer “work”.  This, of course, recalls what Charmaz (2006a) indicated concerning the importance of “flexibility” in grounded theory in general – so it is rather difficult to imagine how any other analytical approach would work within the framework of grounded theory.  Indeed, as Charmaz (2006b) also says (in “Coding in Grounded Theory Practice”) – “We play with the ideas we gain from the data…Coding gives us a focused way of viewing (it)” and “Through coding we make discoveries and gain a deeper understanding of the empirical world” (p. 71).  Moreover, coding “gives us a preliminary set of ideas that we can explore and examine analytically… (and) if we wish, we can return to the data and make a fresh coding” (or, implicitly, revise the existing).
In short, coding in grounded theory in particular “is more than a way of sifting, sorting and synthesizing data…[i]nstead [it] begins to unify ideas analytically because [the researcher keeps] in mind what the possible theoretical meanings of [the] data and codes might be.”  Having characterized Grounded Theory above as an organic research and theoretical methodology, I would submit it’s also fair to say that coding – particularly in connection with employing grounded theory methodology – seems to be a fairly organic approach to organizing and analyzing the data collected.  I consider, in fact, that this is an example of a research/theoretical methodological approach and an analytical/organizational approach being mutually supportive.

Final Project

Case Study

A case study is unique in that it can be thought of both as a method and as a research design, depending on the school of thought to which one subscribes.  Though it is named here as the former, the following explanation is constructed such that it is treated as the latter.  The author understands that a case study consists of multiple methods, but that those methods vary depending on the particular case.  As such, each case study must be carefully designed with several considerations in mind.  Before discussing those considerations, it is helpful to examine the definition of “case study.”

Case Study: A Working Definition

To begin with, case study methodology is inherently constructivist.  In a constructivist approach to any phenomenon, reality is seen as being socially constructed, and relative to an individual’s lived social experience (Baxter and Jack, 2008).  Because a case study “investigates a contemporary phenomenon (the ‘case’) in depth and within its real-world context,” the background and experiences of the case’s key players are an integral part of the case (Yin, 2014, pp. 16-17; Gillham, 2000).  In other words, it is impossible to separate an individual from his or her socially constructed reality.

Second, a case study uses several sources and methods to obtain data about the phenomenon under investigation (Yin, 2014).  In fact, the researcher may not be fully aware of what defines the particular case under study until all sources and methods have been thoroughly examined, and the data triangulated (Ragin and Becker, 1992).   As Masoner (1988) pointed out, “the uniqueness of a case study lies not so much in the methods employed (although these are important) as in the questions asked and their relationship to the end product” (p. 15).

Finally, a case study is flexible.  A case study can (1) be adapted to multiple disciplinary and philosophical perspectives; (2) be used to test or build theory; (3) include either qualitative or quantitative data, or both; and (4) accommodate more than one sampling method (Masoner, 1988).  Further, a case study can be used to examine current, observable phenomena or to reconstruct a particular historical case (George and Bennett, 2005).  A case study may include only one, or a combination, of these features.

Given these interdependent features, the case study is a complex approach.  However, Feagin, Orum, and Sjoberg (1991) provide a good working definition of the case study that demonstrates this complexity while still being succinct: “A case study is here defined as an in-depth, multifaceted investigation, using qualitative research methods, of a single social phenomenon.  The study is conducted in great detail and often relies on the use of several data sources” (p. 2).

Planning and Designing a Case Study

After a working definition has been established, considerations for planning and designing a case study can be explored.  It is important to know when to use a case study, as opposed to another type of method or design.  To this end, Yin (2014) provides some specific guidelines for when a case study might be the optimal design:  a case study is advantageous when “a ‘how’ or ‘why’ question is being asked about a contemporary set of events, over which a researcher has little or no control” (p. 14).  Yin recommends conducting a thorough literature review on the proposed research topic in order to determine what is not yet known about the topic, and to formulate a research question.  Once the question has been formalized, the guidelines above can be used in order to determine whether or not a case study is feasible.

Assuming that a case study is determined to be optimal, the researcher must then carefully design his or her study.  This involves consideration of five key components: “(1) a case study’s questions; (2) its propositions, if any; (3) its unit(s) of analysis; (4) the logic linking the data to the propositions; and (5) the criteria for interpreting the findings” (Yin, 2014, p. 29).  With these components in place, the ideal type and category of case study for the proposed research should begin to emerge.

Categories and Types of Case Studies

            Case studies can be classified by both category and type.  The category of case study refers to its scope: how many individual cases are under study and how many units of analysis are included.  The type of case study refers to the study’s purpose: why the researcher is conducting the investigation.

Categories. There are two main categories of case studies: single-case and multiple-case.  Each of these can then also be either holistic or embedded.  A single-case holistic study would look at a single case and a single unit of analysis.  A single-case embedded study would examine a single case, but have multiple units of analysis.  Similarly, a multiple-case holistic study would look at multiple cases, but only a single unit of analysis for each case.  A multiple-case embedded study would examine multiple cases and multiple units of analysis.  In multiple-case studies, the units of analysis would be the same across sites, or cases.  Single-case studies are appropriate when the case is either “critical, unusual, common, revelatory, or longitudinal” (Yin, 2014, p. 51).  Multiple-case designs are appropriate when the researcher wants to show replication, or make a more compelling case in addressing the research questions (Yin, 2014).

Types. In addition to these categories, there are also five types of case studies: exploratory/evaluative, descriptive/illustrative, explanatory/interpretive, intrinsic, and instrumental.  The type of case study one chooses to conduct depends on the purpose of the study, and on the research questions (Baxter and Jack, 2008).

The first three types of case studies are defined in Yin (2014).  The exploratory, or evaluative, case study is intended to serve primarily as the basis for further research.  This type of study explores a phenomenon in depth in order to elicit research questions of interest.  The descriptive, or illustrative, case study is designed to observe and, subsequently, describe some event or behavior “in its real-world context” (Yin, 2014, p. 238).  Finally, the explanatory, or interpretive, case study seeks to discover causation or correlation.

The other two types of case studies are defined by Stake (1995).  An intrinsic case study is best used when the intention of the researcher is to gain a more complete understanding of the phenomenon in question in that particular case.  An instrumental case study, conversely, should be used when the case itself is not the center of interest, and the focus is not on understanding or even generalizing results.  Instead, it is useful when the researcher is looking for some specific insight or working on refinement of a particular theory.   Once the appropriate category and type of case study have been chosen, data collection methods should be considered.

Data Collection Methods in Case Studies

Case studies are unique in that data collection methods are almost unlimited.  However, not all methods are appropriate for all cases.  The research design and research questions should be carefully considered when determining which methods should be used.  Also, though the case study has historically been primarily qualitative in nature, quantitative methods can be employed to complement qualitative methods.  For example, a survey can be useful as a starting point for a case study.

Surveys.  A survey can be especially useful as a first step in designing a case study.  A survey is designed to collect general information, under natural conditions, on particular variables for a specific sample.  Ideally, survey samples are chosen such that the information collected can be considered to be generalizable to the larger population (Roberts, 1999).  However, the use of the survey as a starting point for a case study does not need such considerations in design, as the population of the case study is the population of interest, and the purpose of conducting the survey is to uncover topics of interest for further exploration using qualitative methods, such as interviews and observations.

Interviews.  “The qualitative interview is the most common and one of the most important data gathering tools in qualitative research” (Myers and Newman, 2007, p. 3).  Interviews can be structured, semi-structured, or unstructured.  A structured interview follows a very specific set of questions and does not deviate to topics outside of those questions.  A semi-structured interview, on the other hand, consists of a series of open-ended questions with room to move off topic if deemed of interest.  Finally, an unstructured interview often begins with a topic of interest about which the researcher wants further information.  Many times, these unstructured interviews begin with a researcher’s field notes following a period of observation (DiCicco‐Bloom and Crabtree, 2006).

Observations.  Yin (2014) describes two types of observations: direct and participant.  In direct observation, the researcher positions herself such that she can unobtrusively observe the phenomenon in question, leaving her free to make notes on what she observes.  Notes may be taken freely, or a more structured form may be provided for data collection.  Participant observation, as its name suggests, occurs when the researcher participates in the phenomenon, and may or may not be able to take notes while participating.  Gillham (2000) further describes direct observation as “mainly analytical/categorical” and participant observation as “mainly descriptive/interpretive” (p. 52).  There is, of course, room for overlap between the two.

Textual evidence.  Questioning and observing the participants in a particular case often tells a compelling story, but sometimes additional evidence is needed to corroborate that story or expand it.  This is where documents and archival records can be useful.  Reviewing such textual evidence can help to verify factual information gathered during an interview or observation.  Inferences can also be drawn from certain documents or records and used as indications that there is a need to investigate further.  Finally, textual evidence can be used as a preliminary overview of a particular organization or case (Yin, 2014).

Physical artifacts.  One final method of data collection worth mentioning is the study of physical artifacts.  Though traditionally used primarily in anthropological research, technology and its products (computer code, activity logs, etc.) can be considered physical artifacts and may be relevant to the case under study (Yin, 2014).

Rigor in the Case Study

            The case study has been criticized as a research design for its perceived lack of rigor (Yin, 2014).  According to Gibbert, Ruigrok, and Wicki (2008), this concern can be addressed by vigilant attention to study design and careful consideration of the four criteria for establishing rigor, as defined by the positivist tradition: construct validity, internal validity, external validity, and reliability.

Construct Validity. Construct validity refers to the use of appropriate “operational measures for the concepts being studied” (Yin, 2014, p. 46).  Establishing construct validity is particularly challenging because a case study is often exploratory, and operationalization must occur during the course of the study.  For this reason, the researcher must be extra diligent about refraining “from subjective judgments during the periods of research design and data collection” (Riege, 2003, p. 80).  Further measures for increasing construct validity include the following: using a variety of sources to support findings, connecting evidence to form a chain that can be followed logically, and having case study participants review data for inconsistencies and misunderstandings (Riege, 2003; Yin, 2014).

Internal Validity. Internal validity deals with the establishment of causation.  Traditionally, the biggest challenge in establishing internal validity is in ferreting out spurious relationships that do not actually show causation (Yin, 2014).  Case study research, however, is more concerned with establishing the ability to make inferences from the case that will hold up in the general population (Riege, 2003).   Measures for increasing internal validity include pattern matching, explanation building, addressing alternate explanations, and using logic models (Riege, 2003; Yin, 2014).

External Validity. External validity involves the generalizability of research findings, and can be addressed initially by properly constructed research questions.  Specifically, “how” and “why” questions should be included in order to arrive “at an analytic generalization” (Yin, 2014, p. 48).  Other ways to increase external validity are to employ theory in the design of single-case studies, and replication logic in multiple-case studies, as well as to clearly define the scope of the study (Riege, 2003; Yin, 2014).

Reliability. Reliability refers to the ability to repeat research and get the same, or reasonably similar, results.  The concern when dealing with qualitative research is the subjectivity of the researchers.  Ways to circumvent this concern involve keeping detailed records and accounts of the research.  Yin (2014) suggests using a “case study protocol” and developing and maintaining a “case study database” (p. 49).

Questions

  1. Design a case study using Yin’s five key components. Be sure to include the rationale for the type and category of case chosen, as well as the methods.
  2. Describe appropriate methods of analysis for case study research.
  3. Describe the compatibility between information worlds and case study research in the context of a specific research problem. Specify the problem and research questions, then describe how you would use case study and information worlds to approach the research.
  4. Choose two examples of case study research in education and/or LIS. Compare, contrast, and critique the studies and describe how you would have approached the case differently, if applicable.


References

Baxter, P., & Jack, S. (2008). Qualitative case study methodology: Study design and implementation for novice researchers. The Qualitative Report, 13(4), 544-559.  Retrieved from http://nsuworks.nova.edu/tqr/vol13/iss4/2

DiCicco‐Bloom, B., & Crabtree, B. F. (2006). The qualitative research interview. Medical Education, 40(4), 314–321.

Eisenhardt, K. M. (1989). Building theories from case study research. The Academy of Management Review, 14(4), 532–550. doi:10.2307/258557

Feagin, J. R., Orum, A. M., & Sjoberg, G. (1991). A case for the case study. Chapel Hill: University of North Carolina Press.

George, A. L., & Bennett, A. (2005). Case studies and theory development in the social sciences. Cambridge, Mass: MIT Press.

Gerring, J. (2004). What is a case study and what is it good for? The American Political Science Review, 98(2), 341–354.

Gibbert, M., Ruigrok, W., & Wicki, B. (2008). What passes as a rigorous case study? Strategic Management Journal, 29(13), 1465–1474.

Gillham, B. (2000). Case study research methods. London; New York: Continuum.

Jaeger, P. T., & Burnett, G. (2010). Information worlds: Social context, technology, and information behavior in the age of the Internet. New York, NY: Routledge.

Masoner, M. (1988). An audit of the case study method. New York: Praeger.

Merriam, S. B. (1988). Case study research in education: A qualitative approach (1st ed.). San Francisco: Jossey-Bass.

Myers, M. D., & Newman, M. (2007). The qualitative interview in IS research: Examining the craft. Information and Organization, 17(1), 2–26.

Ragin, C. C., & Becker, H. S. (Eds.). (1992). What is a case?: exploring the foundations of social inquiry. Cambridge university press.

Riege, A.M. (2003). Validity and reliability tests in case study research: a literature review with “hands‐on” applications for each research phase. Qualitative Market Research: An International Journal, 6(2), 75–86. http://doi.org/10.1108/13522750310470055

Roberts, E. S. (1999). In defence of the survey method: An illustration from a study of user information satisfaction. Accounting & Finance, 39(1), 53-77.

Stake, R. E. (1995). The art of case study research. Thousand Oaks: Sage Publications.

Yin, R. K. (2014). Case study research : design and methods (Fifth edition.). Los Angeles: SAGE.

Ethics Concerns

Ethical issues in qualitative studies, especially in this digital era, are sometimes complex and ambiguous. Thanks to Tim’s reflection, I reviewed the 2012 report from the AoIR (Association of Internet Researchers) ethics working committee, Ethical Decision-Making and Internet Research. It is a good supplement to what I learned from Markham’s article about fabrication.

Ethical decision making is even harder than recalling ethical principles such as privacy and confidentiality. It is seriously intertwined with researchers’ ideological beliefs, the surrounding political and academic culture (the IRB, for example), one’s disciplinary assumptions, and one’s methodological approaches. In internet ethnographic research, the first ethical concern is privacy. For my online forum studies, the posts can be viewed publicly without login information, and thus IRB approval, according to their policy, should not be a big deal.


The guidelines provided by the report are quite useful, and I’d like to share them with my dear classmates. Although the debate continues about private versus public data, human subjects versus textual messages, and who counts as a human subject, the major concern should still be given to people who are vulnerable.

In practice, the following questions need to be kept in mind when conducting related online research:

1. How is the context conceptualized? What are the ethical expectations related to that context?

It is very important to consider how participants view the venue you are investigating. If it is a closed virtual community with rigorous terms of use, more ethical concerns can be expected. By contrast, if you encounter public data with no personal information and no stakeholders, fewer ethical issues need to be kept in mind.

2. How is the context accessed? How are the researcher and participants situated in the context? Is there any need to accommodate “perceived privacy” even if the data are public?

3. What are different ethical expectations for participants and researchers in the context?

4. What are the ethical expectations for studies similar to the one you are going to conduct?

5. How will data be securely managed, stored, and represented?

6. How are findings presented?

7. Are there any potential harms or benefits to participants within the group?

8.  How are we recognizing the autonomy of others and acknowledging that they are of equal worth to ourselves and should be treated so?
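One small, practical step toward the data-management question above is to pseudonymize identifiers before forum data are stored or shared. A minimal sketch in Python, assuming a hypothetical post structure and a project-specific salt (both invented here for illustration, not from the AoIR report):

```python
import hashlib

# Assumption: a secret salt generated once for this project and kept out of
# the published dataset, so pseudonyms cannot be recomputed by outsiders.
SALT = "project-specific-secret"

def pseudonymize(username: str) -> str:
    """Map a username to a short, stable pseudonym like 'user-3f2a1b'."""
    digest = hashlib.sha256((SALT + username).encode("utf-8")).hexdigest()
    return f"user-{digest[:6]}"

# Hypothetical scraped posts (author names invented for this sketch).
posts = [
    {"author": "forum_member_42", "text": "First post in the thread."},
    {"author": "forum_member_42", "text": "A follow-up by the same person."},
]

# Replace real usernames with pseudonyms before anything is written to disk.
cleaned = [{"author": pseudonymize(p["author"]), "text": p["text"]} for p in posts]
```

Note that pseudonyms alone are not enough: verbatim quotations can still be searched and traced back to the original post, so paraphrasing or composite presentation may also be needed.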


Ethics in Qualitative Research

Last week’s discussion on ethics was a great reminder (we can never be reminded too much) that research comes with tremendous responsibility. The readings, as well as our discussion, addressed issues of fabrication, ownership of data, power structures, and speaking for others.

I realize that I briefly mentioned this book before, but I was recently reminded of it at a Toastmasters meeting and then during our ethics discussion. I think it’s relevant to this week’s topic in several ways. The Immortal Life of Henrietta Lacks is about a woman whose cancerous cells were taken without her or her family’s permission in 1951 at Johns Hopkins, during a time when patient consent was tenuous. Henrietta Lacks’ cells were the first to thrive indefinitely in culture. HeLa cells led to many medical breakthroughs, including the polio vaccine as well as advances in cancer, Parkinson’s disease, AIDS, and much more. Though the HeLa cells spurred billions of dollars in pharmaceutical profit, the Lacks children had no idea of their mother’s contributions to modern science. In fact, many of them suffered from health ailments such as blindness, heart disease, and diabetes but were too poor to receive proper care.

Bringing this all back to ethics: the writer, Rebecca Skloot, is a brilliant qualitative researcher who put people first and her project second. She employed archival, ethnographic, and intensive interview techniques in order to weave together Henrietta Lacks’ biography, the scientific history of HeLa production (interestingly, the HeLa saga also has roots at Tuskegee University), the legal and ethical ramifications, and the impact on the remaining members of the Lacks family.

Skloot spent 10 years tracing the story. All the while, she treated the family with the respect that they deserved. The Lacks profess that Skloot became a part of their family. I got a chance to see the Lacks family speak at a conference; they continue to have a great relationship with Skloot. The book was also the selection for our campuswide common reading initiative at the small college where I worked. The undergrads enjoyed it…or so they said! At any rate, there was so much room for the endeavor to go wrong, but I believe that Skloot’s work is an example of activist scholarship. All of this ties back to beneficence being fundamental to research.

Ethical Concerns

Ethics is one of those subjects that is interesting to talk about in the hypothetical sense, but once an ethical concern is raised, it can be difficult to work around that concern, which could jeopardize an entire research project.  My ethnographic proposal to work with the oyster harvesters of Apalachicola Bay raises at least a couple of ethical concerns.

The first concern revolves around completely shutting off the Bay to oyster harvesting to allow the oysters to fully replenish. At issue here is the ethicality of denying a person or family the ability to earn a living. Can the Bay justifiably be closed? On one hand, yes, it can, because if it is not, oyster harvesters will only be able to work in the Bay for a short while longer as the oyster population continues to decline. On the other hand, no, it is not ethical to close the Bay due to the potential immediate loss of a large portion of an individual’s or family’s yearly income. Do scientists have the authority to tell harvesters that they can’t earn money off the oysters? Does the state government have the authority to do that?

A second concern revolves around the ethics of leaving the Bay open to oyster harvesting. As mentioned in the previous paragraph, allowing full harvests to be brought to market will likely destroy the oyster population before it can replenish. If the Bay is left open, it could mean that the livelihoods of many more oyster harvesters will be put in jeopardy in a shorter time period. If a small number of harvesters are allowed to bring large oyster catches to market, it removes the possibility for other harvesters to earn money from the oysters. Eventually, few or possibly no harvesters would be able to catch enough oysters to justify the practice of oyster harvesting. Do a small number of oyster harvesters have the right to ruin the livelihoods of other harvesters?

As I work to carry this project into action, these are certainly concerns that will be raised along with a host of other potential ethical issues. However, balancing the immediate loss of personal or family income with the potential destruction of the Bay’s ecology is certainly a prominent concern as this issue is studied and debated.

Ethics Reflection: Online Ethnography

The Association of Internet Researchers’ ethical guidelines sum up the situation: “ethical conundrums are complex and rarely decided along binary lines.” It is important to note that AoIR uses the term guidelines and not code, which implies that the ground is always shifting and our perception of the internet is not stable.

All content on the internet has been created by a person. Most research involves analyzing text, and many people tend to divorce the written word from the author. This can create some uncomfortable situations if the research is current, which means that the person is likely to be alive (possibly why most literary critics wait until the author is dead). A comment on the web is also not seen in the same manner as words in a book; it is more analogous to talking, which is one aspect that makes it attractive for ethnographic methods. However, when something is said in the physical world, it is heard by those nearby and then disappears forever, while online communications have a habit of staying around permanently. So when a study is published, it can draw attention to a person’s comment, which can then be easily found, eroding that person’s anonymity or confidentiality.

Another issue is whether the internet is a public or a private space. We tend to access the internet individually on our own devices, so mentally we don’t think about the fact that others are looking at the same content simultaneously. When we do think about this, we tend to get squeamish. For example, the people who created bit.ly ran an experiment in which everyone viewing a news site could see everyone else’s mouse pointers and leave comments anywhere on the page in real time. It was a disaster: many people left quickly and didn’t return, while others chased each other’s mouse pointers around and started flame wars. For some reason we don’t like to think of the internet as a public space, but it is one.

I thought the reading about fabricating research had some good ideas on how to protect an individual’s anonymity (or confidentiality), as well as how to convey that an online ethnographic study takes place in a public space. One strength of qualitative research is that it provides an understanding of personal experience. Is it important that each person is kept separate, or can a composite convey the same information? I agree that a composite does not take away from our understanding of a situation and may even enhance it. I think the objection I was hearing from my classmates stems from the poor word choice of “fabrication.” One definition of fabrication is to tell lies, and telling lies is the last thing a researcher wants to be accused of doing. But what term would be better? Other building terms such as cobbling, manufacturing, and erecting are not much better. Unfortunately, I don’t have a good suggestion.

On a basic level, an ethical violation could be seen as any situation where someone (including the researcher) could be harmed by the disclosure of information. The AoIR states that this concept needs to be considered and defined by each researcher and not just seen as institutional hoops to jump through. This reminds me of my freshman year in college, when the residence hall adviser would catch me and my friends doing something questionable. He would say, “If you know it’s wrong, then why are you doing it?”

Reflection Post: Ethics in Qualitative Research

The project I am working on for this class is just chock-full of ethical concerns.  Anytime you start a conversation dealing with sexual orientation and/or gender identity, you risk bringing up intensely triggering events, particularly when the issue of an individual’s coming-out experience is concerned.  Many have dealt with rejection, alienation, and violence in the process of coming out, and the topic must be approached with the highest levels of sensitivity and discretion.  In structuring the research instrument for this project, I made it very clear not only that participation is voluntary (which is standard) but also that there is the option of remaining completely anonymous.  The survey would be administered online without my having met or spoken to the intended participants.  If they are willing, they have the option of further participation in one-on-one interviews, but even then, they have the choice of email, phone, or face-to-face.  While face-to-face would of course be preferred, phone and especially email offer a way to do the interview without many of the concerns of in-person interviews.  I would not see the participant, and they could, if desired, give a fake name in order to maintain full anonymity.

These precautions and structures are not ideal for gathering such information, but they are necessary.  Many of the intended participants may not be out to anyone but themselves or a select few friends, and were these precautions not in place, there might very well be no participants from whom to gather data.  Of course, these issues are not present only in LGBTQ research; the same is true of any population for which the interview, survey, or other data collection method may be triggering.

Ethics in qualitative research

The ethical considerations in qualitative research are more complex than in quantitative research, first because of the nature of qualitative research itself. Last week’s class noted that qualitative research focuses on meaning and on the social world as made up of systems of meaning, and on the interpretive study of cultures and subjectivity rather than measurement. It therefore accepts the intrusion of values into research and does not necessarily aspire to law-like generalities. The ethical problems presented by the nature of qualitative research include, but are not limited to, the following: sample sizes are often too small to be useful, and the subjectivity of the researchers means that the objectivity of qualitative studies is sometimes compromised. It is essential for qualitative researchers to explain clearly in the proposal, and to make sure the funding agencies and research ethics committees know, that the emphasis in qualitative research is on capturing the complexity of the cases in the sample and on constructing descriptions, interpretations, or analyses that may have general relevance and value. Qualitative research is openly subjective and does not aim for objectivity in the sense of a culturally neutral vantage point.

The difficulty of assessing impact on the well-being of participants is another reason why ethics in qualitative research is more complex. Qualitative research is not physically invasive, but it has the potential to be a more socially and emotionally invasive form of research, and assessing impact on well-being in diverse and complex social situations presents difficulties. Also, the unfolding and exploratory nature of qualitative research can leave researchers unable to provide full information to participants at the initial consent-seeking stage. We should be aware that it is not sufficient to establish that there is little risk of physical harm; the degree of personal and social invasiveness needs to be established as well. For example, the shared intimacy of some types of research interview can expose or exacerbate vulnerabilities.