Causal-Inf-Model

Developing a causal inference model for education with a concept map, intended in particular to find the root cause of a student's weaknesses in solving logic and proof-based questions.


Causal Inferencing in Education (for Proofs and Logic in Engineering)

Given a well-modelled causal graph of the learning process, our intention is to use this expert system to diagnose the cause of a student's failure in learning proofs from observed diagnostic data. The graph can be viewed in graph.html. Node weights are learned with a Bayesian approach. It is important to note that we DO NOT generate the causal graph in this example; therefore the graph is given below. The weights of the yellow nodes are found by the diagnostic test, and the weights of the green nodes are computed by Bayesian inference. The edges are governed by an expert system defined in infer.py.
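As a rough illustration of the Bayesian weighting idea, the sketch below updates belief in one latent "green" cause node from one observed "yellow" diagnostic node via Bayes' rule. The `posterior` function and all probability values are illustrative assumptions, not code or parameters from infer.py:

```python
# Minimal sketch (NOT the project's infer.py): a latent cause node, e.g.
# "weak grasp of quantifiers", is updated from the observed pass/fail
# outcome of one diagnostic question using Bayes' rule.

def posterior(prior, p_fail_given_cause, p_fail_given_not_cause, failed):
    """P(cause | diagnostic outcome) for a binary cause/evidence pair."""
    if failed:
        num = p_fail_given_cause * prior
        den = num + p_fail_given_not_cause * (1 - prior)
    else:
        num = (1 - p_fail_given_cause) * prior
        den = num + (1 - p_fail_given_not_cause) * (1 - prior)
    return num / den

# A failed diagnostic question raises belief in the candidate root cause;
# a passed one lowers it (all numbers are placeholders).
p = posterior(prior=0.3, p_fail_given_cause=0.8,
              p_fail_given_not_cause=0.2, failed=True)
```

In the real system this update would run over every green node given all of its yellow parents, with the conditional probabilities supplied by the expert-defined edges.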

Instructions to Run the Software

Please ensure that the path to the /assessments folder is correctly set in infer.py, toCSV.py, and Corev2.html to match your setup. Also make sure the image paths in Corev2.html are correct. To run:

  • Execute: >> python server.py
  • Go to: localhost:5000 to browse the files on the (now local) server, then open Corev2.html.
  • Take the quiz as many times as needed. The data is stored in the assessments/ dir.
  • A visual representation of the graph can be found in graph.html.
  • Then, from the SRC folder, execute: >> python toCSV.py
  • Lastly, execute: >> python infer.py
  • To run the individual student progress tracker, execute: >> python infer_prog.py

The last two commands are scoring and inference tools that compute the deductions. NOTE: Please edit the number_of_entries variable in infer.py to match the number of users who took the test. A sample output for this code is provided below.

All comparative graphs for student progress are auto-generated into results/, where each student has a .pdf file describing their progress within the subject of logic. NOTE: Design choices have resulted in metrics that may be > 1.0 or < 0.0. This only means that the difference in learning successive concepts is too large, and our causal model has failed to capture the student's exact progress. We address this by soft-maxing the probabilities to map them into the range 0 to 1. In the future, we might want to "discover" confounding nodes that repair the model in real time, though this is most often not possible. In our predetermined graph, whenever a user's data cannot be well represented by the diagnostic test results, we flag it as a confounding issue. Confounding factors may exist outside this scope as well.
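The squashing step described above can be sketched as follows. `squash` and its confound flag are illustrative names, not the project's actual code; the sketch applies a softmax to pull out-of-range scores back into (0, 1) and records that a confounding issue was flagged:

```python
# Sketch of the squashing step: metric values outside [0, 1] indicate the
# causal model failed to track the student exactly, so we flag a possible
# confound and softmax the scores into a proper (0, 1) distribution.
import math

def squash(scores):
    confounded = any(s < 0.0 or s > 1.0 for s in scores)
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps], confounded
```

Note that a softmax normalises the whole vector to sum to 1, so the individual values change even when no score was out of range; that matches the README's description of remapping the probabilities rather than clipping them.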

Over time, the system generates progress plots like the one shown below. By reverse-engineering these progress charts, we can gain insight into modelling students' learning patterns. See the /results folder.

Future Work

  • Versions 4.0 and up will contain more support for expanding the feature map.
  • Build the causal map dynamically from observational data alone, to allow for confounding variables.
  • Expand the questioning to subjective questions to ensure an unbiased assessment of potential.
  • Improve inference precision by expanding the graph, but not at the cost of poorer diagnostic data.

For further clarification, contact: pholur@g.ucla.edu