“No Duh,” say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite - developers have to go in and find where the problems are, resulting in a net slowdown of development rather than a productivity gain.

  • sugar_in_your_tea@sh.itjust.works
    1 day ago

    If your test code is repetitive, you’re not following DRY sufficiently, or the code under test is overly complicated. We’ll generally have a single mock or setup code for several tests, some of which are parameterized. For example, in Python:

    from parameterized import parameterized

    @parameterized.expand([
      (key, value, ExpectedException),
      (other_key, other_value, OtherExpectedException),
    ])
    def test_exceptions(self, key, value, exception_class):
        obj = setup()
        setattr(obj, key, value)

        with self.assertRaises(exception_class):
            func_to_test(obj)
    

    Mocks are similarly simple:

    import unittest.mock
    from unittest.mock import MagicMock

    @unittest.mock.patch.object(Class, "method", return_value=...)
    def test_method(self, mock_method): ...  # the decorator must wrap a test

    dynamic_mock = MagicMock(Class)  # spec'd to Class, so unknown attributes fail
    dynamic_mock...
    

    How this looks will vary in practice, but the idea is to design code such that usage is simple. If you’re writing complex mocks frequently, there’s probably room for a refactor.

    • FishFace@lemmy.world
      1 day ago

      I know how to use parametrised tests, but thanks.

      Tests are still much more repetitive than application code. If you’re testing a wrapper around some API, each test may need you to mock a different underlying API call. (Mocking all of them at once would hide real failures.) Each mock is different, so you can’t just extract it somewhere; but it is still repetitive.
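      For instance, a thin wrapper over a hypothetical storage client might test out like this - a different underlying call patched in each test, so there’s nothing shared to extract (Client and StorageWrapper are made-up names):

      import unittest
      from unittest import mock

      class TestStorageWrapper(unittest.TestCase):
          # Each test patches only the one API call its code path hits.
          @mock.patch.object(Client, "put", return_value=True)
          def test_save_user(self, mock_put):
              StorageWrapper().save_user({"id": 1})
              mock_put.assert_called_once()

          @mock.patch.object(Client, "get", return_value={"id": 1})
          def test_load_user(self, mock_get):
              self.assertEqual(StorageWrapper().load_user(1)["id"], 1)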

      If you need three tests, each of which requires a (real or mock) user, a certain directory structure to be present somewhere, and input data to be fetched from somewhere, that’s three things that, even if you streamline them, need to be done in each test. I have been involved in a project where we originally followed the principle of, “if you need a user object in more than one test, put it in setUp or in a shared fixture,” and the result was rapidly growing, unwieldy shared setup between tests - and if ever you should want to change one of those tests, you’d better hope you only need to add to it, not to change what’s already there, otherwise you break all the other tests.
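      The failure mode looks roughly like this (made-up example; make_tree is a hypothetical helper):

      class TestReports(unittest.TestCase):
          def setUp(self):
              # Shared by every test; any change here ripples everywhere.
              self.user = User(name="alice", admin=False)
              self.data_dir = make_tree(["in/", "out/"])

          def test_admin_report(self):
              self.user.admin = True  # overriding shared state per-test anyway
              ...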

      For this reason, zealous application of DRY is not a good idea with tests, and so they are a bit repetitive. That is an acceptable trade-off, but also a place where an LLM can save you some time.

      If you’re writing complex mocks frequently, there’s probably room for a refactor.

      Ah, the end of all coding discussions: “if this is a problem for you, your code sucks.” I mean, you’re not wrong, because all code sucks.

      LLMs are like the junior dev. You have to review their output because they might have screwed up in some stupid way, but that doesn’t mean they’re not worth having.

      • sugar_in_your_tea@sh.itjust.works
        18 hours ago

        zealous application of DRY is not a good idea with tests

        I absolutely agree. My point is that if you need complex setup, there’s a good chance you can reuse it and replace only the data that’s relevant for your test instead of constructing it every time.
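        Something like a factory with overrides (make_order and friends are made up):

        def make_order(**overrides):
            """Default order for tests; callers override only relevant fields."""
            fields = {"id": 1, "status": "open", "total": 100}
            fields.update(overrides)
            return Order(**fields)

        def test_closed_order_is_rejected(self):
            order = make_order(status="closed")  # only the field under test changes
            with self.assertRaises(OrderClosedError):
                process(order)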

        But yes, there’s a limit here. We currently have a veritable mess because we populate the database with fixture data so we have enough data to not need setup logic for each test. Changing that fixture data causes a dozen tests to fail across suites. Since I started at this org, I’ve been pushing against that and introduced the repository pattern so we can easily mock db calls.
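        Roughly, the repository gives tests one seam to mock instead of shared fixture rows (names are illustrative):

        from unittest.mock import MagicMock

        class UserRepository:
            """Single seam between business logic and the database."""
            def get_by_id(self, user_id):
                ...  # real implementation queries the db

        def test_greeting(self):
            repo = MagicMock(UserRepository)  # spec'd, so typos fail loudly
            repo.get_by_id.return_value = User(name="alice")
            self.assertEqual(greet(repo, 1), "Hello, alice")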

        IMO, reused logic/structures should be limited to one test suite. But even then, rules are meant to be broken, just make sure you justify it.

        also a place where an LLM can save you some time.

        I’m still not convinced that’s the case, though. A typical mock takes a minute or two to write; most of the time is spent thinking about which cases to hit or refactoring code to make testing easier. Working with the LLM takes at least that long, especially if you count reviewing the generated code and whatnot.

        LLMs are like the junior dev

        Right, and I don’t want a junior dev writing my tests. Junior devs are there to be trained with the expectation that they’ll learn from mistakes. LLMs don’t learn, they’re perennially junior.

        That’s why I don’t use them for code gen and instead use them for research. Writing code is the easy part of my job, knowing what to write is what takes time, so I outsource as much of the latter as I can.