For decades computer scientists have studied AI in different capacities; from simple chained if statements to generative large language models, AI has been an area of interest. In recent years, generative tools have emerged whose sophistication is beginning to rival human ability in many areas. There are two major places where this is visible: in writing, with tools like Grammarly and more sophisticated autocomplete functions, and in software engineering and programming, with tools like ChatGPT, Bard, and GitHub Copilot becoming exceedingly popular. These tools all work in similar ways, drawing on information from across the internet to produce answers to the questions they are presented. In the two fields mentioned this works exceedingly well because both have generally well-defined rules that an AI is able to model. This use of AI has become a fear for many, including teachers who now face students armed with the powers of a generative chatbot they can access on a device the size of their palms. AI plays an interesting role in the modern education system. On the obvious side, it is the easiest way to cheat on almost any assignment imaginable, especially one that involves extensive writing. On the other hand, it can provide students with an incredibly helpful tool to review and learn on their own time. Rather than waiting for a teacher's office hours, asking a question in class, or sending an email and praying for a response, you can now enter your question into ChatGPT and get a detailed answer and explanation, with the ability to easily ask follow-up questions as well, something simply googling the question doesn't provide.
In the specific situation we find ourselves in, a software engineering class, these two options are exemplified. You can either copy and paste the assignment into ChatGPT, or use GitHub Copilot, and get a fully functioning app in a matter of minutes, especially if you understand the material well and can frame your follow-up questions correctly. Or you can use it as a tool to make the mundane parts of programming much faster: generating test data for an app can be done in an instant, and writing function headers or switch statements can now be completed with the push of a button. Alternatively, learning a new language has never been easier, with the ability to write code into the chatbot in a language you know and have it rewritten into a language you are trying to learn. This last example has been my main use of AI in this class: to understand the syntax and structure of Next.js, TypeScript, and the other tools we have been using. I have primarily used ChatGPT for this task, purely out of convenience and the ability to ask specific questions, unlike Copilot, which just automatically recommends suggestions.
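To make the translation workflow above concrete, here is a sketch of what such an exchange might look like: a plain JavaScript function pasted into the chatbot, and the typed TypeScript rewrite it might suggest. The `formatUser` function and `User` interface are invented purely for this illustration.

```typescript
// Plain JavaScript version, as it might be pasted into the chatbot:
//   function formatUser(user) {
//     return user.name + " (" + user.email + ")";
//   }

// A TypeScript rewrite the chatbot might suggest, with an explicit
// interface describing the expected shape of the input object.
interface User {
  name: string;
  email: string;
}

function formatUser(user: User): string {
  // Template literal replaces manual string concatenation.
  return `${user.name} (${user.email})`;
}

console.log(formatUser({ name: "Ada", email: "ada@example.com" }));
// → Ada (ada@example.com)
```

Comparing the two versions side by side is what makes this workflow useful for learning: the logic is identical, so the only new material to absorb is the type annotations.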
I have somewhat already touched on this idea, but the way I have been using AI in this course is with the sole intent of boosting my learning as much as possible. From my perspective it has been a successful endeavor, although I don't exactly have a control group to compare it to, so it's hard to make any bold claims. I can say for certain that AI has helped my understanding of multiple concepts in this course more than the documentation or videos did. It was especially useful for understanding the architecture of Next.js applications, which proved useful in the final project. The interesting thing that I don't think AI natively helps with is problem-solving ability. To preface this, the course doesn't really involve a large amount of problem solving; you are using tools and prebuilt frameworks and adapting them to your needs. However, there are times when you need to problem solve, especially when those tools don't behave the way you expect them to. AI is very good at reproducing information, and LLMs and generative AIs can get very close to problem solving if they have encountered that problem before and have seen data on how to fix it. However, pure problem solving is something that is incredibly hard to model and recreate. There were often times when I was working on a problem and the AI-suggested solution was much more complicated than the solution that ended up working, simply because AI can't truly think. It can get extremely close and can mimic thinking very well, but at our current stage of computing, AI can't come up with entirely new ideas (there is an argument that neither can humans, but this is not a psychology paper). What I often found most useful in problem-solving situations was to do the problem solving myself and then use AI to understand how I could bring my solution to life in React or Next.js or whatever I was using.
Oftentimes in educational settings you can get away with using AI for problem solving because the problems we face are not new; there are established solutions, and a large part of computer science is building on established solutions. That's the whole point of data structures, algorithms, and design patterns: they are all just ways we have come up with to solve common problems. So AI can seem quite useful when every problem you come across has been solved before. However, if you continuously rely on AI to do your problem solving, then you will be in a whole heap of trouble when you come across an entirely new problem, or when you are working with custom-made tools at a company. This is the largest threat AI poses to software engineering and computer science education, as it renders students less equipped to solve problems out in the field.
AI has begun to revolutionize the tech field for better and worse. It is commonly used to speed up workflows for repetitive parts of code. Additionally, it has become the hot new thing to study and invent with; there are hundreds of startups now whose whole premise is "blank, but with AI," and it is especially popular in the media. In software engineering we have seen studies showing that AI is generally pretty effective at solving software engineering problems, and it is now starting to be integrated into various parts of the software development pipeline. However, as mentioned, AI has limitations: it can only work as well as the data we provide it, so brand-new problems can become roadblocks for an AI not specifically trained for the task.
I think there are many ways in which AI can, and most likely will, be integrated into education, and software engineering will probably be one of the first classes to start integrating it. We will likely start to see the development of AI-based IDEs and code editors with built-in functions acting similar to the way Copilot works right now. In a similar vein, we will start seeing AI integrated into terminals and other basic computer functions, which could have an impact on how future students learn to write code. There is a very real possibility that teaching software engineering in the future means teaching the concepts and then teaching how to write successful prompts to generate the code you want. It is a similar progression to how we now use tools like React and Next.js instead of writing our own API and server-client handlers in plain JavaScript, and how before JavaScript we would write everything in simpler C-based languages. From what I have read, AI is the next step in innovation in a number of fields, but software development and software engineering are clear favorites to spearhead the changes. This also ties into a larger discussion already being had about how we teach computer science, programming, and software engineering: whether students should be taught the skills required to get a job, which would involve a class such as this one, something very hands-on, "gaining experience with industry standard tools," or whether the point of school is more conceptual, and students should be taught computer science concepts at a broad level and then apply that knowledge.
This discussion gets brought up again when you talk about AI: is learning how to use AI to its fullest just another skill students should be taught, alongside git and Prisma for example, or is the more important lesson from a computer science degree the theoretical side, such as how to structure code or how to utilize design patterns? At the current stage of AI, I am not sure which path is best, but it is an important distinction that many colleges and degree programs will have to come to terms with somewhere down the line.
The benefits and hindrances AI has on a student's learning rely almost entirely upon that student's usage of AI. For example, the student who copies the entire assignment into an AI tool and pastes the code into their editor and the student who asks the AI questions about their learning will have vastly different results on, say, an in-person written exam. That is not to discredit the former student; they are using the tools available to them to the utmost degree. However, it does beg consideration of what a student gets out of taking a course when "participating" in such a manner. In all honesty, I am not sure we can have an evidence-based discussion on this matter with the current knowledge of AI's effect on student learning. The general consensus among educators is that AI is a complete hindrance to student learning because it undermines the traditional model of education we have always relied on. This view is not without merit, but it falters when you consider the total lack of studies and evidence on AI and student learning. For the time being, the claim that AI doesn't help students learn can exist without scrutiny, because there is no evidence against it; there is no evidence at all. There is also no evidence that AI does hinder students. All we know is that the current teaching models established in most fields are being easily toppled by students utilizing AI tools. The impact of this on their education can be suggested, and all signs do point to it being a hindrance, but we have no true way of knowing what it will mean for students in the future. The other fact of the matter is that AI will be available to students both now, while they are learning, and later, out in the field, so perhaps becoming reliant upon AI is a feasible option for students if they so choose.
Personally, I would prefer not to be reliant on AI and to continue learning in the traditional sense, but I also enjoy the process of learning, so perhaps my opinion is not entirely unbiased. In short, if things progress as we expect them to, AI will lead to a decrease in student learning in the long run; however, there are numerous edge cases where this future doesn't come to pass and AI instead becomes as essential as a calculator after a certain grade level.
I have shared my thoughts on the future of AI in classrooms in the previous paragraphs, so I will summarize here. AI has the potential to, and likely will, take over how we teach and learn software engineering. It has already begun to become an essential part of software development in many real-world applications and has begun to infiltrate education as well. Whether this will be damaging to future software engineers is yet to be seen and hard to predict.
I believe educators have two options when it comes to the future of teaching software engineering: either lean in and explain how to use these tools to best enhance learning, or shift to a more abstract approach and treat the hands-on parts as a supplemental aspect of learning, similar to a lab class in physics or chemistry. Like it or not, AI will be a part of the field of software engineering, and what role it plays is up to the entire field in a sense. There is also the possibility that software engineers decide by consensus that AI is unethical, and it becomes part of an ethics checklist that students have to learn about, although I do not think that future is likely. The more likely option is that we will live in tandem with AI tools, in the classroom and in the field.