Development Methodology

TODO update this for Open Source developers PLFM-194

Notes on best practices. This is a work in progress: suggested guidelines, not a rigid rulebook.

Testing

If it isn't tested, it doesn't work.  Period.  All new features require appropriate unit tests and integration tests as part of the development process.  Passing tests should be checked in at the same time as new code, and are fully within the scope of code reviews.

We use JUnit as a testing framework and Mockito for mocking and stubbing. At the time of writing we use both JUnit 5 and JUnit 4 for backward compatibility, and we are moving toward JUnit 5 for all new tests. To update the code base incrementally without disruptive PRs, we adopt the following simple rules:

  • New tests: Use the JUnit 5 API and the new Mockito (>= 2) style with strict stubbing, without the lenient setting/API.
  • Old tests: If you happen to touch an existing test, please try to upgrade it from JUnit 4 to JUnit 5. Since some tests contain a lot of code and might take a long time to upgrade, we allow enabling lenient stubbing if they fail, but keep in mind that strict stubbing is the default and we should try to follow it (before enabling lenient stubbing, run the test with the defaults to see if it passes).
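For new tests, the strict-stubbing style described above might look like the following minimal sketch. GreetingService, NameRepository, and their methods are hypothetical names used only for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

// Hypothetical classes under test, for illustration only.
@ExtendWith(MockitoExtension.class)
public class GreetingServiceTest {

	@Mock
	private NameRepository mockNameRepository;

	@InjectMocks
	private GreetingService greetingService;

	@Test
	public void testGreet() {
		// With strict stubbing (the MockitoExtension default), this stub must
		// actually be used by the test, or Mockito reports an
		// UnnecessaryStubbingException when the test finishes.
		when(mockNameRepository.findName(123L)).thenReturn("Ada");

		assertEquals("Hello, Ada", greetingService.greet(123L));
	}
}
```

With strict stubbing the framework itself flags stubs that a test never exercises, which is why the default should be preferred over lenient settings.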

To upgrade a test from JUnit 4 to JUnit 5, the following steps are generally necessary:

  • Replace any org.junit.* reference with the new org.junit.jupiter.api.* package
  • Replace the @Before and @After annotations with @BeforeEach and @AfterEach
  • Replace the @BeforeClass and @AfterClass annotations with @BeforeAll and @AfterAll
  • Remove any @Test(expected=SomeException.class) and use:

    org.junit.jupiter.api.Assertions.assertThrows(SomeException.class, () -> { /* Your code here */ });
  • Replace the org.junit.runner.RunWith annotation with org.junit.jupiter.api.extension.ExtendWith, for example:

    @ExtendWith(SpringExtension.class)
    public class SomeIntegrationTest {}
    
    @ExtendWith(MockitoExtension.class)
    public class SomeUnitTest {}
  • If a test fails with UnnecessaryStubbingException, you might need to review your test code (e.g. check that unnecessary stubbing is not performed in a @BeforeEach, such as stubbing that is not used by every test). If it's not possible to fix an old test without spending too much time, use org.mockito.Mockito.lenient() or enable the lenient setting at the class level:

    @ExtendWith(MockitoExtension.class)
    @MockitoSettings(strictness = Strictness.LENIENT)
    public class SomeLenientTest {}
  • For mocks where the mocked object is NOT STATEFUL and the return value of the method is NOT specific to ANY PARAMETERS, use Mockito.lenient().when() to make that specific stub lenient. Do this only in the @BeforeEach setup() method.

    	@BeforeEach
    	public void setup(){
    		lenient().when(mockServiceProvider.getTableServices()).thenReturn(mockTableService);
    		lenient().when(mockServiceProvider.getWikiService()).thenReturn(mockWikiService);
    		lenient().when(mockServiceProvider.getEntityService()).thenReturn(mockEntityService);
    		lenient().when(mockServiceProvider.getDoiServiceV2()).thenReturn(mockDoiServiceV2);
    		lenient().when(mockServiceProvider.getDiscussionService()).thenReturn(mockDiscussionService);
    		lenient().when(mockServiceProvider.getDataAccessService()).thenReturn(mockDataAccessService);
    	}
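Putting the upgrade steps above together, a before/after comparison might look like the following sketch. Parser and its parse method are hypothetical; in practice you would edit the one test class in place rather than keep both versions:

```java
// Before: JUnit 4
import org.junit.Before;
import org.junit.Test;

public class ParserTest {

	private Parser parser; // hypothetical class under test

	@Before
	public void setup() {
		parser = new Parser();
	}

	@Test(expected = IllegalArgumentException.class)
	public void testParseNull() {
		parser.parse(null);
	}
}

// After: JUnit 5 (same test class, upgraded)
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

public class ParserTest {

	private Parser parser;

	@BeforeEach
	public void setup() {
		parser = new Parser();
	}

	@Test
	public void testParseNull() {
		// assertThrows replaces @Test(expected = ...) and also lets you
		// assert on the thrown exception if needed.
		assertThrows(IllegalArgumentException.class, () -> parser.parse(null));
	}
}
```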

Code Reviews

Let's do code reviews for all code changes.

  • When possible, let's try to keep them small -> more review requests, fewer files to review per request.
  • Don't spend too much time on each review -> the main purpose of our code reviews is communication (not finding bugs).
  • Assign each code review to one person in particular and expect a response from them. CC as many other people as you like (to keep folks in the loop) but those folks do not need to respond.

Regarding rigor, I don't expect the reviewer to scrutinize the code (the burden is on me to test it) but instead to:

  1. get the gist of what I'm trying to do, or let me know if it's confusing (because I haven't done a decent job if the code change is confusing)
  2. let me know whether the approach I've taken seems reasonable or if there is a better way to do it
  3. give me a clue if the way I am testing it is missing major important cases
  4. point out any dependencies of which I might be unaware (e.g., if I change the REST API in a particular way, who might that change impact, and is that okay?)

Meetings

Driving considerations

  • The cost of a meeting is its length in minutes times the number of attendees… it adds up fast
  • But… more efficient than N*(N-1) individual meetings
  • Goal: minimize meeting time, but give us set times when we know we can address full team
  • 2nd Goal: Give us check-in periods smaller than a sprint to help us plan realistically

Meeting Schedule 

  • Stand-up status meetings
    • M, W 11:45 --In hallway / my office if phone needed
    • Three questions:
      • What have you done since last meeting?
      • What are you going to do before next meeting?
      • What are you stuck / blocked on?
    • Identify issues and a mechanism to solve them; DO NOT solve issues in the meeting (unless it can be done in ~3 min).
    • I will time the meeting… I am serious about the 15 min.
  • Friday demo meeting
    • 4:30, 30 mins
    • Same general format, but a bit more time to actually demo progress
    • Will time meeting, serious about sending you home for the weekend by 5
  • Design / Code review meetings
    • Engineers continue to call as needed
    • Try to stack up (M,W?) to leave some long blocks of coding time
  • Sprint Review meeting 
    • Last Friday of the month sprint
    • Recap at Joey’s: What worked, what didn’t