Topic outline

  • General

  • Module Information Card

    Module: Programming and Artificial Intelligence 1

    Semester: First

    Unit: Transversal

    Credits: 2

    Coefficient: 2

    Assessment: Exam

  • Instructor Information Card

    Instructor: Nouh Kheiri

    Email: nouh.kheiri@univ-djelfa.dz

    Availability at the faculty: Saturday through Tuesday

  • Prerequisites

    • Familiarity with the fundamentals of mathematics and logic.

    • Basic knowledge of algorithms and data structures.

    • Ability to program in at least one language (such as Python or C++).

    • Analytical and problem-solving skills.

    • Basic skills in using computers and operating systems.
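    As a rough calibration of the expected programming level, a short Python sketch (illustrative only; the snippet and its function names are our own, not part of the course materials). A student meeting the prerequisite should be able to read and write code of this kind:

    ```python
    # Illustrative sketch of the prerequisite programming level
    # (our own example, not official course material).

    def mean(values):
        """Arithmetic mean of a non-empty list of numbers."""
        return sum(values) / len(values)

    def count_above(values, threshold):
        """Count how many values exceed a threshold."""
        return sum(1 for v in values if v > threshold)

    grades = [12.5, 9.0, 15.0, 11.0, 17.5]
    print(mean(grades))             # 13.0
    print(count_above(grades, 10))  # 4
    ```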


  • Prerequisite Knowledge Test

    Quiz: 1
  • Objectives

    • Enable students to master the foundations of programming and algorithm design.

    • Introduce them to the concepts of artificial intelligence and its core techniques (such as machine learning and logical reasoning).

    • Develop problem-solving skills through intelligent programming.

    • Strengthen analytical and creative thinking in the design of intelligent systems.

    • Prepare students to apply artificial intelligence in real-world domains such as automation, data analysis, and decision-making.


  • Course Contents Index

    Lecture 1: What is artificial intelligence? And how does a computer “think” about social issues?

    Lecture 2: Data as fuel: its sources, quality, and biases

    Lecture 3: From research question to model: a simplified methodological journey for the sociologist

    Lecture 4: Supervised and unsupervised learning

    Lecture 5: Large language models and prompt engineering: an extended sociological reading

    Lecture 6: Analyzing social texts (conceptually)


  • Lecture 1: What is artificial intelligence? And how does a computer “think” about social issues?

    File: 1 Chat: 1 Forum: 1
  • Lecture 2: Data as fuel: its sources, quality, and biases

    File: 1 Chat: 1 Forum: 1
  • Lecture 3: From research question to model: a simplified methodological journey for the sociologist

    File: 1 Chat: 1 Forum: 1
  • Lecture 4: Supervised and unsupervised learning

    File: 1 Chat: 1 Forum: 1
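    As a conceptual companion to this lecture's central distinction, a minimal pure-Python sketch (our own illustration, not one of the course files; the function names are hypothetical): supervised learning predicts a label from already-labeled examples, while unsupervised learning finds groups in unlabeled values.

    ```python
    # Illustrative sketch only (not course material): supervised vs.
    # unsupervised learning on tiny one-dimensional toy data.

    def nearest_neighbor_label(x, labeled_points):
        """Supervised: predict x's label from (value, label) training pairs
        by copying the label of the closest training value."""
        return min(labeled_points, key=lambda p: abs(p[0] - x))[1]

    def two_means_1d(values, iters=10):
        """Unsupervised: split 1-D values into two clusters by alternately
        assigning points to the nearer center and re-averaging the centers."""
        a, b = min(values), max(values)  # initial centers
        for _ in range(iters):
            ca = [v for v in values if abs(v - a) <= abs(v - b)]
            cb = [v for v in values if abs(v - a) > abs(v - b)]
            a, b = sum(ca) / len(ca), sum(cb) / len(cb)
        return ca, cb

    train = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]
    print(nearest_neighbor_label(3.0, train))   # low
    print(two_means_1d([1.0, 2.0, 8.0, 9.0]))  # ([1.0, 2.0], [8.0, 9.0])
    ```

    The key contrast: the first function needs the "low"/"high" labels; the second discovers the two groups from the values alone.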
  • Lecture 5: Large language models and prompt engineering: an extended sociological reading

    File: 1 Chat: 1 Forum: 1
  • Lecture 6: Analyzing social texts (conceptually)

    File: 1 Chat: 1 Forum: 1
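    A minimal sketch of the bag-of-words idea underlying conceptual text analysis (our own illustration, not one of the course files): a document is reduced to word counts, which can then be compared across texts.

    ```python
    # Illustrative sketch only (not course material): a minimal
    # bag-of-words representation of a text.

    from collections import Counter

    def bag_of_words(text):
        """Lowercase a text and count its space-separated words."""
        return Counter(text.lower().split())

    doc = "Data shapes society and society shapes data"
    counts = bag_of_words(doc)
    print(counts["data"])     # 2
    print(counts["society"])  # 2
    ```

    Real text analysis adds tokenization, normalization, and stop-word handling, but the counting step above is the conceptual core.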
  • Exit Test


    Quiz: 1
  • List of Sources and References

    1.  Arlot, S., & Celisse, A. (2010). A survey of cross-validation procedures for model selection. Statistics Surveys, 4, 40–79.

    2.  Bender, E. M., & Friedman, B. (2018). Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the ACL, 6, 587–604.

    3.  Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623.

    4.  Bethlehem, J. (2010). Selection bias in web surveys. International Statistical Review, 78(2), 161–188.

    5.  Blei, D. M. (2012). Probabilistic topic models. Communications of the ACM, 55(4), 77–84.

    6.  Blodgett, S. L., Barocas, S., Daumé III, H., & Wallach, H. (2020). Language (technology) is power: A critical survey of “bias” in NLP. Proceedings of the ACL, 5454–5476.

    7.  Bommasani, R., Hudson, D. A., Adeli, E., et al. (2021). On the opportunities and risks of foundation models. arXiv:2108.07258.

    8.  boyd, d., & Crawford, K. (2012). Critical questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679.

    9.  Breiman, L. (2001). Statistical modeling: The two cultures (with comments and a rejoinder by the author). Statistical Science, 16(3), 199–231.

    10. Brown, T. B., Mann, B., Ryder, N., et al. (2020). Language models are few-shot learners. NeurIPS, 33, 1877–1901.

    11. Chang, J., Gerrish, S., Wang, C., Boyd-Graber, J., & Blei, D. M. (2009). Reading tea leaves: How humans interpret topics in topic models. NeurIPS, 288–296.

    12. Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46.

    13. Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

    14. Denzin, N. K. (1978). The research act: A theoretical introduction to sociological methods (2nd ed.). McGraw-Hill.

    15. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. NAACL-HLT, 4171–4186.

    16. Domingos, P. (2015). The master algorithm: How the quest for the ultimate learning machine will remake our world. Basic Books.

    17. Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv:1702.08608.

    18. Fawcett, T. (2006). An introduction to ROC analysis. Pattern Recognition Letters, 27(8), 861–874.

    19. Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People—An ethical framework for a good AI society. Minds and Machines, 28(4), 689–707.

    20. Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86–92.

    21. Grimmer, J., & Stewart, B. M. (2013). Text as data: The promise and pitfalls of automated content analysis. Political Analysis, 21(3), 267–297.

    22. Groves, R. M., Fowler Jr., F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology (2nd ed.). Wiley.

    23. Habash, N. (2010). Introduction to Arabic natural language processing. Morgan & Claypool.

    24. Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems, 29, 3315–3323.

    25. He, H., & Garcia, E. A. (2009). Learning from imbalanced data. IEEE Transactions on Knowledge and Data Engineering, 21(9), 1263–1284.

    26. Hovy, D., & Spruit, S. L. (2016). The social impact of natural language processing. ACL, 591–598.

    27. James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning. Springer.

    28. Jolliffe, I. T., & Cadima, J. (2016). Principal component analysis: A review. Philosophical Transactions A, 374(2065), 20150202.

    29. Jurafsky, D., & Martin, J. H. (2023). Speech and language processing (3rd ed., draft).

    30. Kaufman, L., & Rousseeuw, P. J. (2009). Finding groups in data: An introduction to cluster analysis. Wiley.

    31. Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS 2017), 43:1–43:23. (Also available as arXiv:1609.05807)

    32. Krippendorff, K. (2018). Content analysis: An introduction to its methodology (4th ed.). Sage.

    33. Kvale, S., & Brinkmann, S. (2009). InterViews: Learning the craft of qualitative research interviewing (2nd ed.). Sage.

    34. Lazer, D., Pentland, A., Adamic, L., Aral, S., Barabási, A.-L., Brewer, D., … Van Alstyne, M. (2009). Computational social science. Science, 323(5915), 721–723.

    35. Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Sage.

    36. Lipton, Z. C. (2018). The mythos of model interpretability. Queue, 16(3), 31–57.

    37. Little, R. J. A., & Rubin, D. B. (2019). Statistical analysis with missing data (3rd ed.). Wiley.

    38. Manning, C. D., & Schütze, H. (1999). Foundations of statistical natural language processing. MIT Press.

    39. Maynez, J., Narayan, S., Bohnet, B., & McDonald, R. (2020). On faithfulness and factuality in abstractive summarization. ACL, 1906–1919.

    40. Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv:1301.3781.

    41. Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. Farrar, Straus and Giroux.

    42. Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–229.

    43. Niculescu-Mizil, A., & Caruana, R. (2005). Predicting good probabilities with supervised learning. ICML, 625–632.

    44. O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

    45. Ouyang, L., Wu, J., Jiang, X., et al. (2022). Training language models to follow instructions with human feedback. NeurIPS, 35, 27730–27744.

    46. Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1–2), 1–135.

    47. Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A., & Lawrence, N. D. (Eds.). (2009). Dataset shift in machine learning. MIT Press.

    48. Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 33–44.

    49. Reyes, A., Rosso, P., & Veale, T. (2013). A multidimensional approach for detecting irony in Twitter. Language Resources and Evaluation, 47(1), 239–268.

    50. Rousseeuw, P. J. (1987). Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20, 53–65.

    51. Rudin, C. (2019). Stop explaining black box machine learning models for high-stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215.

    52. Russell, S., & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson.

    53. Salganik, M. J. (2017). Bit by bit: Social research in the digital age. Princeton University Press.

    54. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.

    55. Shmueli, G. (2010). To explain or to predict? Statistical Science, 25(3), 289–310.

    56. Sokolova, M., & Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks. Information Processing & Management, 45(4), 427–437.

    57. Suresh, H., & Guttag, J. V. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. Proceedings of the 2021 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’21).

    58. van Buuren, S. (2018). Flexible imputation of missing data (2nd ed.). CRC Press.

    59. Wang, X., Wei, J., Schuurmans, D., et al. (2022). Self-consistency improves chain-of-thought reasoning in language models. arXiv:2203.11171.

    60. Wei, J., Wang, X., Schuurmans, D., et al. (2022). Chain-of-thought prompting elicits reasoning in large language models. NeurIPS, 35, 24824–24837.