VALIDITY EVIDENCE BASED ON CONTENT OF A SCIENTIFIC LITERACY ASSESSMENT INSTRUMENT

DOI:

https://doi.org/10.18764/2178-2229v29n2.2022.27

Keywords:

scientific literacy, assessment, validity evidence based on content

Abstract

The study aims to collect validity evidence based on the content of a pilot scientific literacy assessment instrument. Data collection was carried out in seven stages: definition of the cognitive domains, definition of the universe and representation of the content, elaboration of the table of specifications, instrument construction, theoretical analysis of the items, and empirical analysis of the items. Thirty-five items were selected that assess the understanding, analyzing, and evaluating competences present in the main Portuguese curriculum documents. The pilot test was applied to 176 students from eight schools in southern Portugal. The empirical analysis revealed 14 very easy items and seven very difficult ones, which should be revised to adapt the instrument to the target population's abilities.

Author Biographies

Marcelo Alves Coppi, Centro de Investigação em Educação e Psicologia da Universidade de Évora (CIEP-UE)

Research fellow at the Centro de Investigação em Educação e Psicologia (CIEP).

Isabel Fialho, Centro de Investigação em Educação e Psicologia da Universidade de Évora (CIEP-UE)

Assistant Professor at the Universidade de Évora - Centro de Investigação em Educação e Psicologia.

Marília Cid, Centro de Investigação em Educação e Psicologia da Universidade de Évora (CIEP-UE)

Associate Professor at the Universidade de Évora - Centro de Investigação em Educação e Psicologia.

Published

2022-07-05

How to Cite

COPPI, Marcelo Alves; FIALHO, Isabel; CID, Marília. VALIDITY EVIDENCE BASED ON CONTENT OF A SCIENTIFIC LITERACY ASSESSMENT INSTRUMENT. Cadernos de Pesquisa, v. 29, n. 2, p. 99–127, 5 Jul. 2022. Available at: https://periodicoseletronicos.ufma.br/index.php/cadernosdepesquisa/article/view/17211. Accessed: 22 Nov. 2024.

Issue

Section

Articles