Project Proposal: Gator OOP

What will you do?
I will add OOP to Gator. Specifically, I will add classes with single inheritance, subtype and parametric polymorphism, and public/private/protected access modifiers. I will emit GLSL code, with C# and TypeScript as bonus targets if I have time.

How will you do it?
For GLSL compilation, objects can be implemented as GLSL structs, and methods can be implemented as functions that take the struct as an argument (see the sketch below). Dietrich (and I) expect parsing to be a challenge, which is unfortunate since 6120 is so anti-parsing. After that, I will likely need to maintain an additional typechecking context. For the polymorphism, I plan to implement C++-style vtables for subtype polymorphism and C++-style templates for compile-time parametric polymorphism.
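
As a rough sketch of that lowering (the class, field, and function names here are hypothetical illustrations, not actual Gator syntax), a class with a couple of fields and one method could compile to a plain GLSL struct plus a free function that takes the struct as its first argument:

```glsl
// Hypothetical source class:
//   class Circle { vec2 center; float radius; float area() { ... } }
// lowered to a GLSL struct plus a free function taking `self`:
struct Circle {
    vec2 center;
    float radius;
};

float Circle_area(Circle self) {
    return 3.14159265 * self.radius * self.radius;
}

// A call like c.area() would then compile to Circle_area(c).
```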

How will you empirically measure success?
By adding the above features without introducing a regression.

Team members:
Just me

Also, I will make sure the as! type coercion works with subclasses, and I will implement method overriding.

Compiling methods to functions that take a self argument will work for statically dispatched calls but not for method overriding. For example, if you do o.m() and o is a variable of static type C, but there also exists a class D that extends C with an override of m, then your compiler will not be able to pick the right function to invoke based on the run-time type of o. You said you'll implement vtables later on, but I don't see how that will work at the GLSL level… I don't think GLSL supports function pointers? Maybe it would make sense to keep the scope focused on static dispatch (i.e., no virtual method calls).
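
To make the problem concrete (class and method names are hypothetical): once methods are lowered to plain functions, each call site is bound at compile time, so the override never runs for a value whose static type is the superclass.

```glsl
// Hypothetical lowering of a class C and a subclass D that overrides m:
struct C { float x; };
struct D { float x; float y; };  // "extends" C by repeating its fields

float C_m(C self) { return self.x; }
float D_m(D self) { return self.x + self.y; }

// A call o.m() where o has static type C compiles to C_m(o).
// GLSL has no function pointers, so there is nothing to store in a
// vtable slot that would let the call jump to D_m at run time.
```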

Can you elaborate on why parsing will be a challenge? Can you invent syntax that will be easier to parse? There's no need to self-punish by picking a concrete syntax that makes parsing difficult.

One important thing: you do need to expand your evaluation plan. Part of 6120's philosophy includes rigorous empirical evaluations to measure success, rather than just relying on intuition. "I implemented it and nothing seemed to break, as far as I can tell" is not a very high bar to clear. Can you think of something useful and empirical that you can measure, preferably quantitatively?

You're right about the virtual method calls. I think I'll narrow my scope like you suggested.
For parsing, there's no specific reason it should be a challenge other than that Dietrich and I have run into a lot of shift/reduce conflicts in the past. As you suggested, it shouldn't be an issue if I resort to exotic syntax at the first sign of trouble.
For the evaluation, perhaps I can rewrite the shaders in the examples directory using the OOP syntax, and make sure compilation times, execution times, and total lines of source code don't significantly deteriorate.

All sounds good!

Closed in #256.