The 1st International Conference on Computer Science and Information Technology (ICCSIT 2018) will be held in Kuala Lumpur, Malaysia on March 19, 2018.

ICCSIT 2018 is a leading international conference for presenting novel and fundamental advances in the fields of Computer Science and Information Technology.

Call for papers: ICCSIT2018-CFP

Further info: Here

https://people.utm.my/nurazean/2018/02/04/645/


Qualitative Sampling

Study Notes:

Qualitative Research: Sampling & Sample Size Considerations

Adapted from a presentation by Dr. Bonnie Nastasi, Director of the School Psychology Program

 

Sampling for Qualitative Research

Sampling, as it relates to research, refers to the selection of individuals, units, and/or settings to be studied. Whereas quantitative studies strive for random sampling, qualitative studies often use purposeful or criterion-based sampling, that is, a sample that has the characteristics relevant to the research question(s). For example, if you are interested in studying adult survivors of childhood sexual abuse, interviewing a random sample of 10 people may yield only one adult survivor; you would then essentially have a sample size of one and would need to keep sampling randomly until you had interviewed an appropriate number of people who have survived childhood sexual abuse. This is not a wise use of your time.

 

The difference in sampling strategies between quantitative and qualitative studies is due to the different goals of each research approach. Recall that typical quantitative research seeks to infer from a sample to a population (for example, a relationship or a treatment effect). In general, you want to include a variety of types of people in a quantitative study so that it generalizes beyond those in your study. Thus, the goal of quantitative approaches can be stated as "empirical generalization to many."

 

Qualitative research, on the other hand, typically starts with a specific group, type of individual, event, or process. As in the qualitative study of adult survivors of childhood sexual abuse example above, you would choose your sample very purposefully and include in your study only those with this particular experience. The goal of qualitative research can be stated as “in-depth understanding.”

 

It is true that some aspects of quantitative sampling could be relevant to a qualitative researcher. For example, if you are interested in children's experiences of Hurricane Katrina and you have access to 3,000 school children, all of whom experienced the hurricane, you might choose to randomly sample 10 children from the 3,000 for your qualitative study. In the case of ethnographic survey research, you might even seek to obtain sample sizes similar to those in a quantitative design. It could be said, then, that there are more ambiguities than "rules" when it comes to qualitative research in general, and that choosing a sampling strategy and sample size for qualitative research is no different. What is important to remember is that the strategy you adopt will be driven by:

  • Research question(s)/purpose
  • Time frame of your study
  • Resources available

 

Following is a list of common sampling strategies. As you read these strategies, think of which would be most relevant for your area of interest. In many cases, you will see ways to combine the strategies to create an effective approach. For example, you may use snowball sampling as a method to identify a set of extreme/deviant cases. This is an example of combination or mixed purposeful sampling. Thus these methods are not mutually exclusive; a research design may adopt a range of strategies.

 

Common Qualitative Sampling Strategies [1]

  • Extreme or Deviant Case Sampling—Looks at highly unusual manifestations of the phenomenon of interest, such as outstanding successes/notable failures, top of the class/dropouts, exotic events, crises. This strategy selects the particular cases from which the most information can be gleaned, given the research question. One example of an extreme/deviant case related to battered women would be battered women who kill their abusers.
  • Intensity Sampling—Chooses information-rich cases that manifest the phenomenon intensely, but not extremely, such as good students/poor students, above average/below average. This strategy is very similar to extreme/deviant case sampling, as it uses the same logic; the difference is that the cases selected are not as extreme. This type of sampling requires that you have prior information on the variation of the phenomenon under study so that you can choose intense, although not extreme, examples. For example, heuristic research uses the intense, personal experience(s) of the researcher. If you were studying jealousy, you would need to have had an intense experience with this particular emotion; a mild or pathologically extreme experience would not likely elucidate the phenomenon in the same way as an intense one.
  • Maximum Variation Sampling—Selects a wide range of variation on dimensions of interest. The purpose is to discover/uncover central themes, core elements, and/or shared dimensions that cut across a diverse sample while at the same time offering the opportunity to document unique or diverse variations. For example, to implement this strategy, you might create a matrix (of communities, people, etc.) where each item on the matrix is as different (on relevant dimensions) as possible from all other items.
  • Homogeneous Sampling—Brings together people of similar backgrounds and experiences. It reduces variation, simplifies analysis, and facilitates group interviewing. This strategy is used most often when conducting focus groups. For example, if you are studying participation in a parenting program, you might sample all single-parent, female heads of households.
  • Typical Case Sampling—Focuses on what is typical, normal, and/or average. This strategy may be adopted when one needs to present a qualitative profile of one or more typical cases. When using this strategy you must have a broad consensus about what is “average.” For example, if you were working to begin development projects in Third World countries, you might conduct a typical case sampling of “average” villages. Such a study would uncover critical issues to be addressed for most villages by looking at the ones you sampled.
  • Critical Case Sampling—Looks at cases that will produce critical information. In order to use this method, you must know what constitutes a critical case. This method permits logical generalization and maximum application of information to other cases, because if it's true of this one case, it's likely to be true of all other cases. For example, if you want to know whether people understand a particular set of federal regulations, you may present the regulations to a group of highly educated people ("If they can't understand them, then most people probably cannot") and/or you might present them to a group of under-educated people ("If they can understand them, then most people probably can").
  • Snowball or Chain Sampling—Identifies cases of interest from people who know people who know what cases are information-rich, that is, who would be a good interview participant. Thus, this is an approach used for locating information-rich cases. You would begin by asking relevant people something like: “Who knows a lot about ___?” For example, you would ask for nominations, until the nominations snowball, getting bigger and bigger. Eventually, there should be a few key names that are mentioned repeatedly.
  • Criterion Sampling—Selects all cases that meet some criterion. This strategy is typically applied when considering quality assurance issues. In essence, you choose cases that are information-rich and that might reveal a major system weakness that could be improved. For example, if the average length of stay for a certain surgical procedure is three days, you might set a criterion for being in the study as anyone whose stay exceeded three days. Interviewing these cases may offer information related to aspects of the process/system that could be improved.
  • Theory-Based or Operational Construct or Theoretical Sampling—Identifies manifestations of a theoretical construct of interest so as to elaborate and examine the construct. This strategy is similar to criterion sampling, except it is more conceptually focused. This strategy is used in grounded theory studies. You would sample people, incidents, etc., based on whether or not they manifest or represent an important theoretical or operational construct. For example, if you were interested in studying the theory of "resiliency" in adults who were physically abused as children, you would sample people who meet theory-driven criteria for "resiliency."
  • Confirming and Disconfirming Sampling—Seeks cases that are both "expected" and the "exception" to what is expected. In this way, this strategy deepens initial analysis, seeks exceptions, and tests variation. In this strategy you find both confirming cases (those that add depth, richness, and credibility) as well as disconfirming cases (examples that do not fit and are the source of rival interpretations). This strategy is typically adopted after initial fieldwork has established what a confirming case would be. For example, if you are studying certain negative academic outcomes related to environmental factors, like low SES, low parental involvement, high teacher-to-student ratios, lack of funding for a school, etc., you would look for both confirming cases (cases that evidence the negative impact of these factors on academic performance) and disconfirming cases (cases where there is no apparent negative association between these factors and academic performance).
  • Stratified Purposeful Sampling—Focuses on characteristics of particular subgroups of interest; facilitates comparisons. This strategy is similar to stratified random sampling (samples are taken within samples), except the sample size is typically much smaller. In stratified sampling you “stratify” a sample based on a characteristic. Thus, if you are studying academic performance, you would sample a group of below average performers, average performers, and above average performers. The main goal of this strategy is to capture major variations (although common themes may emerge).
  • Opportunistic or Emergent Sampling—Follows new leads during fieldwork, takes advantage of the unexpected, and is flexible. This strategy takes advantage of whatever unfolds as it is unfolding, and may be used after fieldwork has begun, as a researcher becomes open to sampling a group or person they may not have initially planned to interview. For example, you might be studying 6th grade students' awareness of a topic and realize you will gain additional understanding by including 5th grade students as well.
  • Purposeful Random Sampling—Looks at a random sample. This strategy adds credibility to a sample when the potential purposeful sample is larger than one can handle. While this is a type of random sampling, it uses small sample sizes, so the goal is credibility, not representativeness or the ability to generalize. For example, if you want to study clients at a drug rehabilitation program, you may randomly select 10 of 300 current cases to follow (a minimal sketch of this and of stratified purposeful sampling appears after this list). This reduces judgment within a purposeful category, because the cases are picked randomly and without regard to the program outcome.
  • Sampling Politically Important Cases—Seeks cases that will increase the usefulness and relevance of information gained based on the politics of the moment. This strategy attracts attention to the study (or avoids attracting undesired attention by purposefully eliminating from the sample politically sensitive cases). This strategy is a variation on critical case sampling. For example, when studying voter behavior, one might choose the 2000 election, not only because it would provide insight, but also because it would likely attract attention.
  • Convenience Sampling—Selects cases based on ease of accessibility. This strategy saves time, money, and effort; however, it has the weakest rationale and the lowest credibility. This strategy may yield information-poor cases, because cases are picked simply because they are easy to access rather than according to a specific strategy or rationale. Sampling your co-workers, family members, or neighbors simply because they are "there" is an example of convenience sampling.
  • Combination or Mixed Purposeful Sampling—Combines two or more strategies listed above. Basically, using more than one strategy above is considered combination or mixed purposeful sampling. This type of sampling meets multiple interests and needs. For example, you might use chain sampling in order to identify extreme or deviant cases. That is, you might ask people to identify cases that would be considered extreme/deviant and do this until you have consensus on a set of cases that you would sample.
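
For concreteness, here is a minimal Python sketch of two of the strategies above: purposeful random sampling (randomly following 10 of 300 cases) and stratified purposeful sampling (a few cases per stratum). It is an illustration only; the case records, strata, and sample sizes are invented and do not come from the original notes.

    import random
    from collections import defaultdict

    # Hypothetical sampling frame: 300 current cases, each tagged with a
    # performance stratum (below average / average / above average).
    cases = [{"id": i, "performance": random.choice(["below", "average", "above"])}
             for i in range(300)]

    # Purposeful random sampling: randomly follow 10 of the 300 cases.
    # The small random draw adds credibility, not statistical representativeness.
    purposeful_random = random.sample(cases, k=10)

    # Stratified purposeful sampling: a handful of cases from each stratum of
    # interest, so that major variations are captured.
    strata = defaultdict(list)
    for case in cases:
        strata[case["performance"]].append(case)
    stratified = {level: random.sample(group, k=min(3, len(group)))
                  for level, group in strata.items()}

    print(len(purposeful_random))                                   # 10
    print({level: len(sample) for level, sample in stratified.items()})  # 3 per stratum

Either way, the random draw only reduces judgment within an already purposeful category; it does not make the sample statistically representative.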

 

Sample Sizes: Considerations

When determining sample size for qualitative studies, it is important to remember that there are no hard and fast rules. There are, however, at least two considerations:

 

  1. What sample size will reach saturation or redundancy? That is, how large does the sample need to be to allow for the identification of consistent patterns? Some researchers say the sample should be large enough to leave you with "nothing left to learn." In other words, you might conduct interviews and, after the tenth one, realize that no new concepts are emerging; the concepts, themes, etc. have begun to be redundant (a rough way to check for this is sketched after these two considerations).

 

  2. How large a sample is needed to represent the variation within the target population? That is, how large must a sample be in order to capture an appropriate amount of the diversity or variation represented in the population of interest?
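
As a rough illustration of the saturation idea in the first consideration, the short Python sketch below treats each interview as a set of codes and flags saturation once several consecutive interviews contribute no new codes. This is only one assumed way to operationalise "nothing left to learn"; the interview codes shown are invented.

    def reached_saturation(coded_interviews, window=3):
        """One set of codes per interview, in the order conducted.
        Returns True once `window` consecutive interviews add no new codes."""
        seen = set()
        consecutive_without_new = 0
        for codes in coded_interviews:
            new_codes = codes - seen
            seen |= codes
            consecutive_without_new = 0 if new_codes else consecutive_without_new + 1
            if consecutive_without_new >= window:
                return True
        return False

    # Hypothetical coding of ten interviews; interviews 5-7 add nothing new.
    interviews = [{"trust", "loss"}, {"trust", "coping"}, {"coping", "stigma"},
                  {"support"}, {"stigma"}, {"support", "trust"}, {"coping"},
                  {"trust"}, {"support"}, {"stigma"}]
    print(reached_saturation(interviews))  # True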

 

You may estimate sample size based on the approach of the study or the data collection method used. For each category there are some related rules of thumb, presented in the tables below.

 

 

Rules of Thumb Based on Approach:

 

Research Approach | Rule of Thumb
Biography/Case Study | Select one case or one person.
Phenomenology | Assess 10 people. If you reach saturation before assessing ten people, you may use fewer.
Grounded theory/ethnography/action research | Assess 20-30 people, which is typically enough to reach saturation.

 

 

Rules of Thumb Based on Data Collection Method:

 

Data Collection Method | Rule of Thumb
Interviewing key informants | Interview approximately five people.
In-depth interviews | Interview approximately 30 people.
Focus groups | Create groups that average 5-10 people each. In addition, consider the number of focus groups you need based on the "groupings" represented in the research question. That is, when studying males and females across three different age groupings, plan for six focus groups: one for each gender within each age grouping.
Ethnographic surveys | Select a large and representative sample (purposeful or random, depending on the purpose), with numbers similar to those in a quantitative study.

 

 

There should also be consideration of the size of a good database: one that will yield data of sufficient quality and quantity. While the quality of the data is affected by the quality of the interview protocol, the quantity of data is also a factor. For example, with a well-conceived interview protocol, a 10-20 hour database should provide enough data to support a solid qualitative dissertation. In this case, the following chart can be used:

Guidelines for Length of Interviews:

 

Number of Interviews | Length of Each Interview
10 | 1–2 hours
20 | 30 minutes – 1 hour
30 | 20–40 minutes

 

Adjustments may be made if other forms of qualitative data collection are involved. For example, if there is a 2-hour focus group and 10 interviews, the duration of the interviews might be shortened.
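
The arithmetic behind the 10-20 hour guideline is simple enough to check in a few lines. The Python sketch below uses assumed figures (20 interviews of 45 minutes each plus one 2-hour focus group), not a prescription from the notes.

    def database_hours(n_interviews, hours_per_interview, other_hours=0.0):
        """Total hours of recorded qualitative data in the planned database."""
        return n_interviews * hours_per_interview + other_hours

    # 20 interviews of 45 minutes each, plus a 2-hour focus group.
    total = database_hours(20, 0.75, other_hours=2.0)
    print(total, 10 <= total <= 20)  # 17.0 True: within the 10-20 hour target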

 

Conclusion

Regardless of the strategy or strategies you adopt for a study, and/or the sample size you plan for, you need to provide a rationale for your choices by articulating the expected benefits and weaknesses of any strategy/sample size you choose. A key component of any qualitative research design is flexibility. Accordingly, if you choose a qualitative research design, you must have a high tolerance for ambiguity.

 

References

Camic, P. M., Rhodes, J. E., & Yardley, L. (Eds.). (2003). Qualitative research in psychology: Expanding perspectives in methodology and design. Washington, DC: American Psychological Association.

Creswell, J. W. (1998). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, CA: Sage Publications, Inc.

Dey, I. (1999). Grounding grounded theory: Guidelines for qualitative inquiry. San Diego, CA: Academic Press.

Harter, S. (1978). Effectance motivation reconsidered: Toward a developmental model. Human Development, 21, 34–64.

Harter, S. (1999). The construction of the self: A developmental perspective. New York: Guilford.

Hitchcock, J. H., Nastasi, B. K., Dai, D. C., Newman, J., Jayasena, A., Bernstein-Moore, R., Sarkar, S., & Varjas, K. (2004). Illustrating a mixed-method approach for identifying and validating culturally specific constructs. Accepted for publication in Journal of School Psychology.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Thousand Oaks, CA: Sage Publications, Inc.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis (2nd ed.). Thousand Oaks, CA: Sage Publications, Inc.

Nastasi, B. K., Moore, R. B., & Varjas, K. M. (2004). School-based mental health services: Creating comprehensive and culturally specific programs. Washington, DC: American Psychological Association.

Nastasi, B. K., Varjas, K., Sarkar, S., & Jayasena, A. (1998). Participatory model of mental health programming: Lessons learned from work in a developing country. School Psychology Review, 27 (2), 260–276.

Patton, M. Q. (2001). Qualitative evaluation and research methods (3rd ed.). Newbury Park, CA: Sage Publications, Inc.

Sarkar, S. (2003). Gender as a cultural factor influencing mental health among adolescent students in India and Sri Lanka. Unpublished doctoral dissertation, University at Albany, SUNY.

Schensul, J. J., & LeCompte, M. D. (Eds.). (1999). Ethnographer’s toolkit: Volumes 1–7. Walnut Creek, CA: AltaMira Press.

Varjas, K. M. (2003). A participatory culture-specific consultation (PCSC) approach to intervention development. Unpublished doctoral dissertation, University at Albany, SUNY.

Wolcott, H. F. (1990). Writing up qualitative research. Thousand Oaks, CA: Sage Publications, Inc.

 

[1] Patton, M. Q. (2001). Qualitative evaluation and research methods (3rd ed.). Newbury Park, CA: Sage Publications.

Qualitative Research Methods Overview

 

 

 

Introduction to Qualitative Research

What is qualitative research?
Qualitative research is a type of scientific research. In general terms, scientific research consists of an investigation that:

  • seeks answers to a question
  • systematically uses a predefined set of procedures to answer the question
  • collects evidence
  • produces findings that were not determined in advance
  • produces findings that are applicable beyond the immediate boundaries of the study

Qualitative research shares these characteristics. Additionally, it seeks to understand a given research problem or topic from the perspectives of the local population it involves. Qualitative research is especially effective in obtaining culturally specific information about the values, opinions, behaviors, and social contexts of particular populations.

 

Read more here

Academics ‘face higher mental health risk’ than other professions

Lack of job security, limited support from management and weight of work-related demands on time among risk factors

The majority of people working at universities find their job stressful, and academics are more prone to developing common mental health disorders than those working in other professions, according to a systematic review of published work on researchers’ well-being.

A lack of job security, limited support from management and the weight of work-related demands on their time were among the factors listed as affecting the health of those who work in higher education.

The report, commissioned by the Royal Society and the Wellcome Trust, urges institutions to work more closely with the UK’s regulator on health and safety in the workplace to address the risks to staff well-being.

For the study, research institute RAND Europe conducted a literature review to find out what is known about mental health in researchers, and identified 48 studies, which it analysed for the report entitled Understanding Mental Health in the Research Environment.

“Survey data indicate that the majority of university staff find their job stressful. Levels of burnout appear higher among university staff than in general working populations and are comparable to ‘high-risk’ groups such as healthcare workers,” write Susan Guthrie, a research leader at RAND, and colleagues in the report.

About 37 per cent of academics have common mental health disorders, which is a high level compared with other occupational groups. More than 40 per cent of postgraduate students report depression symptoms, emotional or stress-related problems or high levels of stress, they say.

“In large-scale surveys, UK higher education staff have reported worse well-being than staff in other types of employment in the areas of  work demands, change management, support provided by managers and clarity about one’s role,” the report says.

Real and perceived job insecurity is an important issue for researchers, particularly those at the start of their careers who are often employed on a series of short-term contracts, the report adds.

Dr Guthrie and colleagues found that staff who devoted a lot of their working time to research experienced less stress than those who did not. But it was not clear whether this reduction in stress was related to the seniority of scientists, who are able to spend more time on their research.

Among the report’s conclusions is a call for universities to work with the Health and Safety Executive to help address workplace stress. The organisation has issued management standards that describe how workplaces can identify and mitigate stress at an organisational level, they say.

“It could be useful to work through that approach with a university or a research organisation to identify the mechanisms at play in those environments. Doing so could establish the relevance of the approach in this context, and potentially provide a model that could be used more widely in the sector,” they add.

holly.else@timeshighereducation.com

Expensive academic conferences give us old ideas and no new faces

Dear all, please be more choosy when deciding whether to attend a conference.

Conferences have been held since the early days of academia. But their size has changed dramatically. The intimate gatherings of academics from a specific field have now been replaced with mega conferences, frequently featuring 1,000 participants or more. Stockholm World Water Week, which brings together scholars and practitioners, counts more than 3,000 participants.

These gatherings are fancier than ever. Academic conferences used to be in universities. Yet the last annual meeting of the American Association of Geographers, the world’s largest geography conference, was in the Sheraton Boston, a four-star venue. Many now also feature elaborate social programmes. Those who attend World Water Week must choose whether the cocktail reception on Monday night, the royal banquet on Wednesday night or the “mingle and dance” on Thursday night is the conference’s main see-and-be-seen event.

Some of these events are financed via conference fees. Scholars from the United Kingdom had to pay a registration fee of £530 to attend the most recent International Sustainable Development Research Society Conference, a large conference on sustainable development. Add £850 for flights (the conference was held in Colombia), £300 for the Airbnb and £100 for miscellaneous items such as in-transit Wi-Fi, and the total comes to roughly £1,780. That is roughly equal to the monthly net salary of a post-doc in the UK.

Conference grants are difficult to obtain and can be minuscule. Many early-career scholars struggle to attend academic conferences, and financing a visit to an academic conference can be a challenge even for tenured academics. An associate professor from Frostburg University, United States, reported that her institution provides her with only £150 annually for conference travel – not even enough to pay 20% of this year's registration fee for World Water Week.

Those who manage to attend academic conferences expect many benefits. They hope to find their next collaborators. They hope to broaden their horizons and develop new research ideas. Conferences that mix practitioners with academics frequently also aspire to influence policy. This year's World Water Week hopes to find novel hands-on solutions to wastewater reduction and reuse.

However, our experience suggests that conferences usually do not deliver on these promises. There are always the same old faces, with a few more wrinkles every year, using obfuscating jargon to present the same old stuff. We've seen papers featured at conferences in recent years that could easily have come from the 1960s or 1970s. This is not the research we think will deliver clean and safe water and improve sanitation for the millions who need it.

Unknown faces would come to these conferences, not just the academic bourgeoisie. Meanwhile, more rigorous peer review of conference abstracts may decrease the number of participants, but could help to ensure that the work presented is thought-provoking. These types of conferences may even influence policy.

Slowly, academics have started experimenting with the current conference format. Seminar leaders at World Water Week now give feedback on presentations prior to the conference to make them easier to follow. Meanwhile, the Feminist and Women's Studies Association of the UK and Ireland will hold an entirely virtual conference in early September. Many more such initiatives are needed, though.

Most academic conferences are oversized. Even the privileged few who can attend them rarely find what they hoped for. The academy frequently claims to be a champion of social justice and diversity. But the academic conference business underscores the hypocrisy of this claim.

Original Source Click HERE.

The 7th International Conference on Information Technology and Multimedia (ICIMU 2017)

The 7th International Conference on Information Technology and Multimedia (ICIMU 2017), organized by the College of Computer Science and Information Technology, Universiti Tenaga Nasional (UNITEN), will be held on the 8th and 9th of November 2017 at Hotel Bangi-Putrajaya, Selangor, Malaysia. The conference provides a platform for researchers, academics, engineers, standardization bodies, government officials and practitioners to dialogue and exchange ideas on recent research and development in various areas of computing.

The theme for this edition of the conference is "Powering Information Society through Data Analytics".

We invite you to submit your unpublished, original research work to our conference. Submitted papers may address technical or non-technical aspects of data analytics and other areas of computing. Relevant papers that do not explicitly address the theme are also welcome.

All accepted papers will be published in one of the following selected Thomson Reuters (Web of Science) indexed journals:

  1. International Journal of Future Generation Communication & Networking (IJFGCN)
    ISSN: 2233-7857
  2. International Journal of Security and Its Applications (IJSIA)
    ISSN: 1738-9976
  3. International Journal of Advanced and Applied Sciences (IJAAS)
    ISSN: 2313-626X
  4. International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI)
    ISSN: 1989-1660

Further info Click HERE.