AI-Assisted Software Development

This wiki page is less of a “how to” and more of a “this is what I looked at, this is what I tried, and this is what I found”. This is mostly investigative exploratory work, and if this results in helpful tools and processes for other devs at Sage, that’s just a nice side bonus.

Executive Summary

New stuff that might be worth investigating

Worth investigating on a limited basis

Investigation blocked

Not worth further investigation

  • DiffBlue Cover (https://www.diffblue.com/products/) - ChatGPT does a better job. However, DiffBlue has the advantage that it runs as an IDE plugin. Might be worth looking into again in 2024 or 2025 as the technology improves.

  • Codeball - Appears to be a dead project; there has been no activity on its GitHub or Discord for several months.

  • Codiga - Codiga is being shut down and its future is undetermined.

Code Generation

OpenAI

OpenAI is best known for their product ChatGPT. However, they also offer models that can be used for code generation. Initial investigations into OpenAI were underwhelming. The better of the two code models (code-davinci-002) was frequently under load and unavailable, so I did my investigations with the lesser of the two (code-cushman-001, described as “almost as capable as code-davinci-002, but slightly faster”).

These code models are currently free, but OpenAI’s documentation implies that they will stop being free once they’re out of beta.
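For reference, these code models are exposed through OpenAI’s completions API. The sketch below is my own illustration of driving code-cushman-001 programmatically (assuming Java 11+ and an OPENAI_API_KEY environment variable); the investigation itself used whatever interface was handy.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OpenAiCompletionSketch {
    public static void main(String[] args) throws Exception {
        // The code models are served by the v1/completions endpoint.
        String requestBody = "{"
                + "\"model\": \"code-cushman-001\","
                + "\"prompt\": \"// Unit test with mocks. @Test public void testGetUploadViewForExporter3() {\","
                + "\"max_tokens\": 500"
                + "}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/completions"))
                .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(requestBody))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The generated code comes back in choices[0].text of the JSON response.
        System.out.println(response.body());
    }
}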

I used it to try to generate unit tests, and the code it generated was repetitive, incomplete, and wouldn’t even compile. For reference, the code under test was https://github.com/DwayneJengSage/BridgeServer2/blob/develop/src/main/java/org/sagebionetworks/bridge/services/UploadService.java#L522

This is the prompt I provided

// Generate unit tests for https://github.com/DwayneJengSage/BridgeServer2/blob/develop/src/main/java/org/sagebionetworks/bridge/services/UploadService.java#L522
@Test
public void testGetUploadViewForExporter3() {

This is what OpenAI returned

@Test
public void testGetUploadViewForExporter3() {
    UploadView view = uploadService.getUploadViewForExporter(EXPORTER_3_ID, UPLOAD_ID_3);
    assertEquals(UPLOAD_ID_3, view.getUploadId());
    assertEquals(EXPORTER_3_ID, view.getExporterId());
    assertEquals(EXPORTER_3_NAME, view.getExporterName());
    assertEquals(EXPORTER_3_DESCRIPTION, view.getExporterDescription());
    assertEquals(EXPORTER_3_VERSION, view.getExporterVersion());
    assertEquals(EXPORTER_3_LICENSE, view.getExporterLicense());
    assertEquals(EXPORTER_3_LICENSE_URL, view.getExporterLicenseUrl());
    assertEquals(EXPORTER_3_LICENSE_LOGO_URL, view.getExporterLicenseLogoUrl());
    assertEquals(EXPORTER_3_LICENSE_LOGO_URL, view.getExporterLicenseLogoUrl());
    assertEquals(EXPORTER_3_LICENSE_LOGO_URL, view.getExporterLicenseLogoUrl());
    // repeating the last line until we ran out of tokens

Instead of linking to our code, I attempted to copy the code into OpenAI as a prompt. However, the code in question consumes about 1,500 tokens, leaving only about 500 tokens for the generated completion. I gave the following prompt

// Unit test with mocks.
@Test
public void testGetUploadViewForExporter3() {

OpenAI returned the following

Conclusion: OpenAI isn’t ready for code generation, at least not as is.
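As an aside, the token-budget problem above can be estimated before sending anything to the model. A common rule of thumb is roughly 4 characters per token for English text and code, and the 1,500 + 500 arithmetic above reflects the roughly 2k-token context window of these models. A quick sketch (my own illustration):

import java.nio.file.Files;
import java.nio.file.Path;

public class TokenBudgetSketch {
    public static void main(String[] args) throws Exception {
        // Pass the file (or snippet) you plan to paste into the prompt.
        String text = Files.readString(Path.of(args[0]));
        // Rule of thumb: roughly 4 characters per token.
        long approxTokens = text.length() / 4;
        System.out.println("Approximate prompt tokens: " + approxTokens);
        // Against a ~2,048-token context window, a ~1,500-token method leaves
        // only ~500 tokens for the model's completion.
    }
}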

GPT-4 / ChatGPT

GPT-4 is currently only available as ChatGPT through https://chat.openai.com/chat

First, I copy-pasted the method at https://github.com/DwayneJengSage/BridgeServer2/blob/develop/src/main/java/org/sagebionetworks/bridge/services/UploadService.java#L522. ChatGPT responded with

This method is a part of some application or service that is responsible for managing uploads of health data records, which are related to a study. The method fetches the view that includes both the upload and the record (if they exist) for a given upload ID, and optionally gets the timeline metadata and the adherence records, if they exist.

The method takes as input parameters the appId and uploadId, which are required, and the studyId, which is only required if we are fetching adherence. It also takes two boolean flags, fetchTimeline and fetchAdherence, which are used to determine whether to include the timeline metadata and the adherence records in the view, respectively.

The method first checks that the appId and uploadId are not null, and that the studyId is not blank if fetchAdherence is true. It then gets the upload and the record for the given uploadId, and checks that they are not null and that they belong to the same appId. If neither the upload nor the record exist, it throws an EntityNotFoundException.

The method then gets the healthCode from the upload or the record, and uses it to get the userId. It checks that the caller has permission to read the uploads for the given userId and studyId.

If fetchTimeline or fetchAdherence is true, the method gets the instanceGuid from the upload or the record metadata, and uses it to fetch the timeline metadata and/or the adherence records, if they exist. If the timeline metadata is associated with a different appId, the method logs an error and moves on. If the adherence records are associated with a different appId, it throws an EntityNotFoundException.

Finally, the method returns the UploadViewEx3 object that contains the upload, the record, the timeline metadata (if fetched), the adherence records (if fetched), the healthCode, and the userId.

This is an accurate summary of the code.
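For readers without the code open, the precondition flow ChatGPT is describing looks roughly like the sketch below. This is a simplified, self-contained stand-in using names from the summary above, not the actual Bridge implementation.

public class UploadViewPreconditionSketch {
    // Simplified stand-in for the validation described above.
    static void validate(String appId, String uploadId, String studyId, boolean fetchAdherence) {
        if (appId == null || uploadId == null) {
            throw new NullPointerException("appId and uploadId are required");
        }
        if (fetchAdherence && (studyId == null || studyId.isBlank())) {
            throw new IllegalArgumentException("studyId is required when fetchAdherence is true");
        }
        // ... the real method then loads the upload and the record, throws
        // EntityNotFoundException if neither exists, resolves healthCode and
        // userId, checks read permission, and optionally fetches the timeline
        // metadata and adherence records.
    }

    public static void main(String[] args) {
        validate("app-id", "upload-id", null, false); // passes
        validate("app-id", "upload-id", "", true);    // throws: blank studyId
    }
}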

I then prompted ChatGPT: “Write unit tests for that method.” ChatGPT responded with

Note that the test is incomplete. I prompted ChatGPT with “Please continue.” ChatGPT wrote one more unit test

It then added some descriptive text

In this test, we simulate the case where a record exists but an upload does not. We create a HealthDataRecordEx3 object with the same ID as the upload ID and set its appId and healthCode properties. We then mock the UploadDao and HealthDataEx3Service to return null for the upload and the created record for the record. We also mock the AccountService to return the userId for the given appId and healthCode. We then invoke the method with fetchTimeline and fetchAdherence set to false, and assert that the returned UploadViewEx3 object has the expected properties. Finally, we verify that the appropriate methods were called on the mocked objects.

These two tests should cover the majority of the possible scenarios in the getUploadViewForExporter3 method. However, depending on the implementation details of the various services that the method depends on, there may be other edge cases that should be tested as well.

Note that attempts to get ChatGPT to regenerate testGetUploadViewForExporter3WithUploadOnly() caused it to get confused, and I was unable to recover what ChatGPT originally intended (if it intended anything at all) with that test.

The tests that ChatGPT did write needed quite a bit of massaging, but they do compile and pass, and they are readable and logical. However, they achieve only 51% line coverage and 38% branch coverage (even worse than DiffBlue) and miss some important test cases.
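To make the mocking pattern ChatGPT described concrete, here is a compact, self-contained sketch in the same spirit, using TestNG and Mockito. The types are simplified stand-ins, not the real Bridge classes, and the logic is a toy version of the healthCode fallback described above.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertNull;

import org.testng.annotations.Test;

public class UploadViewMockingSketchTest {
    interface UploadDao { Upload getUpload(String uploadId); }
    interface HealthDataEx3Service { HealthDataRecordEx3 getRecord(String id); }

    static class Upload { String healthCode; }
    static class HealthDataRecordEx3 { String healthCode; }
    static class UploadViewEx3 { Upload upload; HealthDataRecordEx3 record; String healthCode; }

    // Toy version of the logic under test: prefer the upload's healthCode,
    // fall back to the record's, and fail if neither exists.
    static UploadViewEx3 getView(UploadDao dao, HealthDataEx3Service svc, String uploadId) {
        UploadViewEx3 view = new UploadViewEx3();
        view.upload = dao.getUpload(uploadId);
        view.record = svc.getRecord(uploadId);
        if (view.upload == null && view.record == null) {
            throw new IllegalStateException("upload " + uploadId + " not found");
        }
        view.healthCode = view.upload != null ? view.upload.healthCode : view.record.healthCode;
        return view;
    }

    // The "record exists but upload does not" case from ChatGPT's description.
    @Test
    public void recordOnly() {
        UploadDao dao = mock(UploadDao.class);
        HealthDataEx3Service svc = mock(HealthDataEx3Service.class);
        HealthDataRecordEx3 record = new HealthDataRecordEx3();
        record.healthCode = "health-code";
        when(dao.getUpload("upload-id")).thenReturn(null);
        when(svc.getRecord("upload-id")).thenReturn(record);

        UploadViewEx3 view = getView(dao, svc, "upload-id");

        assertNull(view.upload);
        assertEquals(view.healthCode, "health-code");
        verify(dao).getUpload("upload-id");
        verify(svc).getRecord("upload-id");
    }
}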

I also tried the prompt: “Enumerate test cases for method getUploadViewForExporter3 in class UploadService at https://github.com/DwayneJengSage/BridgeServer2/blob/develop/src/main/java/org/sagebionetworks/bridge/services/UploadService.java”. ChatGPT replied with

Here are some test cases for the getUploadViewForExporter3 method in the UploadService class:

1. Test case where the studyId parameter is null. Expect an exception to be thrown.

2. Test case where the studyId parameter is an empty string. Expect an exception to be thrown.

3. Test case where the healthCode parameter is null. Expect an exception to be thrown.

4. Test case where the healthCode parameter is an empty string. Expect an exception to be thrown.

5. Test case where the synapseUserId parameter is null. Expect an exception to be thrown.

6. Test case where the synapseUserId parameter is an empty string. Expect an exception to be thrown.

7. Test case where the numFiles parameter is negative. Expect an exception to be thrown.

8. Test case where the numBytes parameter is negative. Expect an exception to be thrown.

9. Test case where the numRecords parameter is negative. Expect an exception to be thrown.

10. Test case where the uploadId parameter is null. Expect an exception to be thrown.

11. Test case where the uploadId parameter is an empty string. Expect an exception to be thrown.

12. Test case where the uploadId parameter is not a valid UUID. Expect an exception to be thrown.

13. Test case where the numFiles, numBytes, and numRecords parameters are all greater than 0, and the uploadId parameter is a valid UUID. Expect an UploadView object to be returned with all the fields populated appropriately.

* testGetUploadViewForExporter3WithUploadOnlyAndNoData: test the case where the healthCode parameter is not null, studyId is null, startDate and endDate are null, and the exporter3 parameter is UploadValidationStrictness.UPLOAD_ONLY. This test should ensure that no records are returned in the response.

* testGetUploadViewForExporter3WithStrictValidationAndNoData: test the case where the healthCode parameter is not null, studyId is null, startDate and endDate are null, and the exporter3 parameter is UploadValidationStrictness.STRICT_VALIDATION. This test should ensure that no records are returned in the response.

* testGetUploadViewForExporter3WithUploadOnlyAndRecords: test the case where the healthCode parameter is not null, studyId is null, startDate and endDate are null, and the exporter3 parameter is UploadValidationStrictness.UPLOAD_ONLY. This test should ensure that records are returned in the response.

* testGetUploadViewForExporter3WithStrictValidationAndRecords: test the case where the healthCode parameter is not null, studyId is null, startDate and endDate are null, and the exporter3 parameter is UploadValidationStrictness.STRICT_VALIDATION. This test should ensure that records are returned in the response.

* testGetUploadViewForExporter3WithStudyAndNoData: test the case where the healthCode parameter is null, studyId is not null, startDate and endDate are null, and the exporter3 parameter is UploadValidationStrictness.UPLOAD_ONLY. This test should ensure that no records are returned in the response.

* testGetUploadViewForExporter3WithStudyAndRecords: test the case where the healthCode parameter is null, studyId is not null, startDate and endDate are null, and the exporter3 parameter is UploadValidationStrictness.UPLOAD_ONLY. This test should ensure that records are returned in the response.

* testGetUploadViewForExporter3WithDatesAndNoData: test the case where the healthCode parameter is null, studyId is null, startDate and endDate are not null, and the exporter3 parameter is UploadValidationStrictness.UPLOAD_ONLY. This test should ensure that no records are returned in the response.

* testGetUploadViewForExporter3WithDatesAndRecords: test the case where the healthCode parameter is null, studyId is null, startDate and endDate are not null, and the exporter3 parameter is UploadValidationStrictness.UPLOAD_ONLY. This test should ensure that records are returned in the response.

Note that the output refers to parameters not present in the linked code, which makes this list of test cases not particularly useful.

Next, I tried the following prompt: “Summarize the pull request https://github.com/Sage-Bionetworks/BridgeServer2/pull/649 in a page of text, with the intended audience of other software developers.” ChatGPT responded with

This pull request adds functionality to the BridgeServer2 project that allows a user to upload large files in chunks to the server. The goal of this feature is to reduce the risk of uploading a large file in a single request, which could cause issues with network timeouts, file size limitations, or server resource constraints.

The changes made in this pull request include modifications to the UploadService class, as well as the addition of a ChunkedFileHelper class that handles the chunking and reassembly of files. The UploadService now has a new method called "uploadChunk" that accepts a chunk of a file, and a separate method called "completeChunkedFileUpload" that reassembles the chunks into a single file once all the chunks have been uploaded.

Several new unit tests were added to test this new functionality, including tests to ensure that files are correctly split into chunks, that the correct error messages are returned when necessary, and that the reassembly of chunks produces the expected result.

Overall, this pull request adds a valuable feature to BridgeServer2 that improves the reliability and robustness of file uploads, especially for larger files.

This summary is completely wrong. Additional prompts asking for the title and description of the linked pull request confirmed that ChatGPT is unable to follow links.

Conclusion:

  • Better at generating unit tests than DiffBlue. Might be good for generating tests for simple code, but fails at anything complex.

  • Good at describing a copy-pasted piece of code. However, it is unable to follow links (to either code or pull requests) and describe the linked content, let alone enumerate or write unit tests for it. This prevents ChatGPT from working with large codebases.

GitHub Copilot

GitHub Copilot was similar to OpenAI: both wrote code that was repetitive, didn’t compile, and needed a lot of editing to get running. The one advantage Copilot has over OpenAI is that Copilot can run as an IDE plugin.

In terms of cost, GitHub Copilot costs $10/mo, which is less than 0.2% of an entry-level Associate Engineer’s salary at Sage.

I tried Copilot specifically for unit test generation. For reference, the code under test was https://github.com/DwayneJengSage/BridgeServer2/blob/develop/src/main/java/org/sagebionetworks/bridge/services/UploadService.java#L522

I went to UploadServiceTest.java and gave the following prompt

Copilot filled this in with the following

This looks really nice, except that most of the setters on Upload, Record, Timeline, and Adherence don’t actually exist, nor do the getters on UploadView. Also, by the logic of the code under test, this wouldn’t actually exercise the code under test.

At Nick Grosenbacher’s suggestion, I attempted to use Copilot to enumerate test cases. I gave the following prompt

Copilot auto-completed it to

And then repeated it a bunch of times.

Okay, let’s try

Copilot auto-completed it to the same thing

Okay, maybe my mistake was asking Copilot to enumerate test cases in a file separate from where my code is. Let’s try putting this prompt in a comment immediately after the method I’m trying to test

Copilot auto-completed it with

Okay, it’s nowhere near exhaustive. There are a lot of important cases that it’s missing, but it’s a good start.

Conclusion: Copilot is worth continuing to passively investigate while working on normal sprint work. It seems okay for enumerating test cases, but it’s incomplete, and I wouldn’t yet trust it to generate code for me.

Update: Copilot works well as a more sophisticated auto-complete that can complete entire blocks of code for you, which cuts down on repetitive typing and copy-pasting.
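For example (a hypothetical illustration, not output captured during this investigation): typing just the comment and method signature below is often enough for Copilot to propose the entire body.

import java.util.List;

public class CopilotCompletionExample {
    // Join upload IDs into a comma-separated string.
    // (Typing the comment above plus the signature below is typically enough
    // for Copilot to suggest the method body.)
    public static String joinUploadIds(List<String> uploadIds) {
        return String.join(",", uploadIds);
    }

    public static void main(String[] args) {
        System.out.println(joinUploadIds(List.of("upload-a", "upload-b"))); // upload-a,upload-b
    }
}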

DiffBlue Cover

See https://www.diffblue.com/products/. DiffBlue Cover is a tool that analyzes code and generates unit tests. It exists as an IntelliJ plugin.

Some limitations of DiffBlue Cover

  • Unit tests generated by DiffBlue and human-written unit tests should be in separate files, or else the diffs get messy.

  • DiffBlue takes a hard dependency on JUnit 4 or 5, while we currently use TestNG. This isn’t a blocker, but it does require extra setup to support a project with tests in both frameworks (see the sketch after this list).
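To illustrate the mixed setup, here is a sketch assuming both TestNG and JUnit 5 are on the test classpath (class names are illustrative). The two frameworks use different @Test annotations, which is part of why generated and hand-written tests are best kept in separate files.

// Hand-written TestNG test (our existing convention). In practice this class
// and the one below would live in separate files.
class UploadServiceTestNgExample {
    @org.testng.annotations.Test
    public void humanWrittenTest() {
        org.testng.Assert.assertEquals(1 + 1, 2);
    }
}

// DiffBlue generates JUnit tests; keeping them in their own file avoids mixing
// the two frameworks' @Test annotations and keeps diffs clean.
class UploadServiceJUnitExample {
    @org.junit.jupiter.api.Test
    public void generatedStyleTest() {
        org.junit.jupiter.api.Assertions.assertEquals(2, 1 + 1);
    }
}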

There’s a free tier, but the paid tier is expensive at $56k/yr. For that price, we could fund about 70% of an entry-level Associate Engineer’s salary at Sage.

I pointed DiffBlue at the code under test (https://github.com/DwayneJengSage/BridgeServer2/blob/develop/src/main/java/org/sagebionetworks/bridge/services/UploadService.java#L522) and told it to generate tests. DiffBlue created the following

DiffBlue at least wrote tests that not only compile, but also pass! The test coverage is poor, though: only 55% line coverage and 42% branch coverage, and most of the lines it failed to cover were important cases that needed to be tested.

Also, the tests are almost unreadable. I wouldn’t accept this level of quality from a paid intern. I’m not even sure I would accept this level of quality for free.

Conclusion: ChatGPT does a better job. However, DiffBlue is more convenient to use as an IDE plugin. Might be worth investigating again in a year or two as technology improves.

Code Reviews

AI seems to have trouble generating code, but it might be able to help with code reviews.

Personally, I think AI-assisted code review shouldn’t replace human-led code review. But I think the two pieces of value are

  1. catching things that human reviewers might miss and

  2. catching things early so that developers can start fixing issues in their code instead of waiting for teammates’ availability.

I’m not sure how we could measure it, but I conjecture that if we point an AI at our code and it catches issues at all (other than inconsequential issues or stylistic nitpicks), then it’s providing value. Whether that value is worth what we’re being charged for it remains to be seen, but AWS CodeGuru charges $30/mo for each 100k lines of code, which is less than 0.5% of an entry-level Associate Engineer’s salary at Sage.

Leads include:

Codeball


Codeball runs in your GitHub repository as a GitHub Action. However, I ran into permissions issues setting it up and was not able to resolve them on my own. Codeball support is done through Discord, and the Codeball devs aren’t very active there. As a result, I am currently blocked and unable to evaluate Codeball any further.

In addition to Discord, I also tried creating a GitHub issue for support

Update: Codeball appears to be a dead project. There hasn’t been any activity on their GitHub or their Discord for several months.

Codiga


Codiga runs as a GitHub app. Installing the Codiga app requires org-level permissions, even if you’re only installing for repositories that you have admin permissions on. This is currently waiting on IT to respond to the installation request.

See IT-2753 for tracking.

AWS CodeGuru

See https://aws.amazon.com/codeguru/

There’s an issue where CodeGuru tries to associate with my fork (DwayneJengSage/BridgeServer2) instead of the upstream repository (Sage-Bionetworks/BridgeServer2), which means I’m unable to point CodeGuru at our pull requests.

I filed an AWS support request https://support.console.aws.amazon.com/support/home?region=us-east-1#/case/?displayId=12290789771&language=en