
Initial spike on Autocorrect #985

Open · wants to merge 12 commits into base: master

Conversation

@devonestes (Contributor) commented Jul 27, 2022

This PR is for #944. It's really just a spike at this point, but it works for the single check I've implemented autocorrect for! At the moment the UI is probably confusing, since we still show issues that have been autocorrected, but I think this general idea should work.

Remaining work here is:

  • Improve UI to remove any issue that has been autocorrected from the list of issues printed at the end of the run
  • Add documentation
  • Implement autocorrect callbacks for more checks
  • Try to break this to see how it might fail in real-world use cases

At this point the pipeline for the default "suggest" task has been modified to include an Autocorrect step. That's all hooked up, and a callback for `autocorrect/1` has been added to the `Credo.Check` behaviour and the default implementation. This can be implemented for each check individually, and it's a noop for now.

At this point the task in the execution pipeline for running the autocorrect is implemented & tested. It's naive, but it should work in basic cases!

At this point we have autocorrect functionality implemented & working for the TrailingBlankLine check. I've verified this with manual testing. The UX still has a ton of work to do, but for now the bulk of the work is actually working.

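A fix in the spirit of the TrailingBlankLine one can be sketched in a few lines. This is a hypothetical illustration, not the PR's actual callback: it trims any trailing blank lines and leaves exactly one final newline.

```elixir
# Hypothetical sketch of a TrailingBlankLine-style fix (not the PR's code):
# drop any trailing blank lines and keep exactly one final newline.
defmodule TrailingBlankLineFix do
  def autocorrect(source) do
    String.trim_trailing(source, "\n") <> "\n"
  end
end

TrailingBlankLineFix.autocorrect("defmodule Foo do\nend\n\n\n")
# => "defmodule Foo do\nend\n"
```

Because the function always normalizes to a single trailing newline, running it on already-fixed input returns the input unchanged.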
We can now pass an `--autocorrect` flag to the default `suggest` command when running `mix credo` and it works!

Now when we autocorrect an issue, we remove that issue from the stored list of issues so it isn't shown to the user (since technically it's no longer an issue once it has been autocorrected).

One other thing we might do is change the formatter to display where an issue was and indicate that it was autocorrected, also making sure the exit code is 0 for any autocorrected issues; but for now this seemed like a reasonable path forward and much simpler than that option.

At this point it will autocorrect the most basic version of alias ordering, but it doesn't yet handle multi-aliases or groupings of aliases. That will come in a later commit.

More basics of the autocorrect behavior for the alias ordering check have been implemented, and now we just need to handle blocks of aliases with whitespace between them.

We've now handled pretty much all the use cases for ordering aliases automatically. There is some formatter wackiness that will need to be figured out, but beyond that this is a good example of how an autocorrect callback might be implemented for a more involved check than the first one.
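The simplest slice of that behavior can be sketched like this. It is a hypothetical illustration, not the PR's implementation: sort a contiguous run of single-module alias lines case-insensitively, ignoring multi-aliases and grouped blocks entirely.

```elixir
# Hypothetical sketch: sort a contiguous block of simple alias lines
# alphabetically (case-insensitively). Multi-alias syntax and grouping
# are deliberately out of scope here.
defmodule AliasOrderFix do
  def sort_alias_block(lines) do
    Enum.sort_by(lines, &String.downcase/1)
  end
end

AliasOrderFix.sort_alias_block(["alias Foo.Zeta", "alias Foo.Alpha"])
# => ["alias Foo.Alpha", "alias Foo.Zeta"]
```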

@rrrene (Owner) commented Jul 28, 2022

This is a great start to discuss some of the necessary core concepts in more detail (and using an actual implementation, which is much appreciated)!

Thanks for that! I will nitpick on this in a later reply ^^

@NickNeck (Contributor) commented:

Hi, I am happy to see this in Credo.

I think the use of `Code` has one problem: the code gets changed and comments get lost.

"""
defmodule Foo do
  # my foo
  def foo, do: :foo
end
"""
|> Code.string_to_quoted!() 
|> Macro.to_string()
|> Code.format_string!() 
|> to_string()
|> IO.puts()
defmodule Foo do
  def foo do
    :foo
  end
end

Or is there a solution in the code that I don't see? I have not run the code yet.
A solution to the problem would be sourceror.

I have also done some experiments on autocorrect, see recode. Maybe that could be a little inspiration.

@devonestes (Contributor, Author) commented:

@NickNeck Totally correct - this is just a spike and a sort of proof of concept for how I was thinking about tackling this, to get some feedback and maybe identify how this might work at a high level. I needed a couple of basic implementations of an autocorrect callback for a couple of checks just to see it all work and to make it easier to find ways to break this current idea for an implementation. If this general idea does seem like it might work, then of course there would need to be a lot more scrutiny around how we do this autocorrect, and a ton more testing and improvements.

But if the general idea for how this would work seems OK to everyone, then we can get the "plumbing" in place that will support the feature as a whole, and then spend a lot of time working on each individual callback implementation of autocorrect/1 for each check separately to make sure they're all done really well. And because the default implementation is a noop, we can take our time with those implementations, but (I think) the important part of this PR is to get an idea of whether this plan for the broader feature will work, or if there's something we haven't yet considered that would invalidate this approach.

This is the first implementation of an autocorrect callback for a consistency check. I don't _love_ how we're doing it right now, since we're not actually storing what the "correct" behavior to change to is; instead we're just using the issue message to tell us what the autocorrect behavior should be. It's not great, but it works!

This autocorrects an `unless` with an `else` block by changing it to an `if` statement.

This adds the first bits of an implementation for how we might want to autocorrect an unused String operation. Unfortunately, because there's nothing in the AST that represents **nothing**, anything we replace that unused String operation with in the AST will be compiled into something - even if just the literal atom `nil`. That would probably be better than unused String operations, since those would just be noops at runtime, but they could be confusing in the code.

If we want to actually do this there's probably a _lot_ more that would
need to be done for this check, but what we have offers a glimpse into
how this kind of check could be autocorrected as a proof of concept.

@devonestes (Contributor, Author) commented:

OK, at this point I think this PR is "done" as a proof of concept. I've implemented basics of autocorrect (although certainly not comprehensive!) for one check in each category of Check to show that we can autocorrect each of them with this structure.

I'd say the goals and sort of "plan" for this would be these steps:

  1. Merge a PR that includes the basics for running autocorrect (basically this PR minus the callback implementations) but that doesn't expose anything to users.
  2. Implement a single autocorrect callback and at this point really consider the UI/UX that would be best for users. This first check would likely be Credo.Check.Readability.TrailingBlankLine since that is the easiest to implement by a wide margin.
  3. Once we have the basics for running autocorrect merged and a single check implemented with a really good UI/UX, then all that remains is to implement autocorrect for each other check individually. Those can all be worked on in parallel and shipped as minor patches when they're ready. Some of these will be quick, but some will be really difficult, and there can be a ton of edge cases with many of them. However, since this is a highly requested feature, I bet we'd see a lot of people tackle them because they're so self-contained - you'd just touch two files and do some string/AST manipulation, and you wouldn't need to know how Credo works at all to contribute.

So, I'd love to hear what folks think about this current implementation and if there is anything that we strictly cannot do with this, or anything that we think might break this in such a way that it would be unusable or would create issues for users. If there isn't anything that can be found like that and folks agree this is a good path forward then we can try and move our way through the steps outlined above.

@rrrene (Owner) commented Jul 28, 2022

As mentioned, this is great. Please take this critique with a grain of salt and as a compliment for putting this together.

I would love for Credo 1.7 to provide a general concept and architecture for implementing --autofix capabilities!

To the incoherent, nitpicky part (sorry, need to sleep):

This does not (yet) address any of the areas of concern laid out in the discussion of the corresponding issue:

  • Transformer API (the problems you are describing in some of your commit messages and comments are why I think this is an area of interest)
  • Elixir compatibility (I think the red CI highlights the importance of this area)
  • Extensibility
  • Autofix Race Conditions
  • Issue Invalidation

Although the last two areas are kind of gracefully ignored by fixing all issues at once.

  • The current implementation sidesteps source file management by reading and writing all files directly, which bypasses Credo's file management and probably breaks the retry mechanism of --watch.

  • The current implementation removes all issues if there is an autofixer function, regardless of whether the issues were fixed or not.

  • This fixes all found issues for each check at once, yet still reduces over the source code iterating over all issues
    (running the fixer function for each issue of a check seems unneeded in this implementation).

  • Related: this does not (yet) account for new issues that result from fixing other issues (see the AliasOrder test case, which as of this writing is producing a new instance of the very issue it's trying to fix; I scrambled a codebase running this branch @ 0d84490 ^^)

  • It seems odd that the analysis run_ functions receive the check's Params, but the autofix function does not know how the check is configured.

Lastly, I would really prefer this to be called autofix (as I alluded to in the original issue).

While I am not a native speaker, "correcting issues" to me sounds like something is objectively wrong with my code and Credo will "correct" it - and "correcting the issues" does not have the same optimistic ring to it as "fixing the issue". ✌️

@devonestes (Contributor, Author) commented:

  • Transformer API (the problems you are describing in some of your commit messages and comments are why I think this is an area of interest)

Can you elaborate what you mean by "Transformer API"? I wanted to spike out a couple of the autocorrect implementations just as a proof of concept that the autocorrect/2 callback would be sufficient for solving this problem. Is there some other API that you're envisioning exposing to users here?

  • Elixir compatibility (I think the red CI highlights the importance of this area)

The compatibility issues here are only relevant to the implementations of the autocorrect/2 callbacks themselves. While it would be awesome to support autocorrect for all checks and for all versions of Elixir, I don't think we'll be able to reliably do that going back to 1.8 or something like that. I think having something like this in the autocorrect implementations would be fine:

```elixir
def autocorrect(file, issue) do
  if Version.compare(System.version(), "1.13.0") == :lt do
    file
  else
    do_autocorrect(file, issue)
  end
end
```

Being unable to autocorrect on all versions of Elixir shouldn't mean we don't autocorrect on any of them. As long as this support is documented, I think it's a step in the right direction, and if we do find some way to expand support in the future it can always be added later!

Unfortunately, there will just be some checks that cannot be autocorrected, and some that can only be autocorrected on certain Elixir versions. I think that as long as this is documented and exposed to users in a clear way, this would still offer a lot of benefit to the folks on Elixir versions that can use it, so it's still worth it even if not all users can use some parts of this feature.

  • Extensibility

This is totally correct - the current PR doesn't address extensibility at all. This one I might actually tack on so we can see how it would work, but the idea is that we'd allow folks to register their own callbacks for a check in their .credo.exs file, sort of like this:

```elixir
pipe_chain_start = fn file, issue ->
  # User-configured autocorrect behavior here!
end

%{
  configs: [
    %{
      checks: [
        {Credo.Check.Refactor.PipeChainStart, [autocorrect: pipe_chain_start]}
      ]
    }
  ]
}
```

I might take a swing at spiking this out, too, since it shouldn't be too tricky to add, but it would look something like this:

```elixir
# lib/credo/cli/task/run_autocorrect.ex
defp run_autocorrect(issue, file, exec) do
  autocorrect_fun =
    if user_fun = user_configured_autocorrect(issue, exec) do
      user_fun
    else
      &issue.check.autocorrect/2
    end

  case autocorrect_fun.(file, issue) do
    ^file ->
      file

    corrected_file ->
      Execution.remove_issue(exec, issue)
      corrected_file
  end
end
```

We should have access to the user configuration from the Execution struct, and so we should be able to determine from there if the user has configured a custom autocorrect/2 callback and use that if configured.

  • Autofix Race Conditions
  • Issue Invalidation

As you mentioned, my plan here is to avoid race conditions by just not running the fixes in parallel. I don't think it's reasonably possible to do autocorrect like this in parallel, and since we'll only run autocorrect at most once per issue, doing this in series shouldn't be too slow for users. Even if a user had two processes each running autocorrect at the same time, because the autocorrect callbacks should be idempotent, both runs should still produce the same files on disk in the end.

The one race condition that might still exist is if a user writes to a file while autocorrect is running (for example, through an editor integration that autosaves and runs Credo on save, with the two happening in parallel). However, that's a rare enough situation that I don't think it should invalidate the feature. Honestly, I don't think it's unreasonable to not support very specific situations like that. Sure, supporting them might make users' lives a bit easier, but it would be a ton of work, and I wouldn't want to hold up the majority of the value to users (just running `mix credo --autocorrect` manually from the command line before committing) because the more difficult use case isn't yet supported.

About issue invalidation: given that these callbacks should only fix issues that actually exist (usually by pattern matching on AST nodes), if an issue has somehow already been fixed by a different check, then the autocorrect for the now-invalidated issue should just be a noop, and that's not a problem.
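The idempotency property relied on here can be stated as code. This is a hypothetical illustration with a stand-in fix function: applying a fix to already-fixed output must be a noop.

```elixir
# Idempotency property: fixing already-fixed output is a noop.
# The `fix` function here is a stand-in for any autocorrect callback,
# not one of the PR's actual implementations.
fix = fn source -> String.trim_trailing(source, "\n") <> "\n" end

once = fix.("foo\n\n\n")
twice = fix.(once)

once == twice
# => true
```

Any callback satisfying this property can safely run again on a file that another process (or a previous run) already fixed.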

  • The current implementation sidesteps source file management by reading and writing all files directly, which bypasses Credo's file management and probably breaks the retry mechanism of --watch.

I didn't consider this at all, to be honest, since I haven't used this feature, but I think it's ok to not support this. What would you think about just documenting that these are incompatible and raising an exception when parsing command line arguments if both of these are given?

  • The current implementation removes all issues if there is an autofixer function, regardless of whether the issues were fixed or not.

At the moment an issue is only removed if the return value from the autocorrect callback (the file after applying any changes) differs from the value passed in. If no changes were made, no issue is removed from the issues list. In my testing I saw that this only removed issues that had been fixed and not those which hadn't, but I may have missed something there.

That said, I'm not even entirely sure yet if this is how we want the UI/UX to be, or if we want to annotate the issue in some way to make it clear to the user that "There was an issue here, but it has been autocorrected for you."

  • This fixes all found issues for each check at once, yet still reduces over the source code iterating over all issues
    (running the fixer function for each issue of a check seems unneeded in this implementation).

This is totally correct - we could optimize by having the implementations run only once per check and making sure each check fixes all occurrences of the issue in a given file. Alternatively, we could make sure the given fix only applies to the issue passed into the autocorrect/2 function. Which direction we go isn't yet super important, to be honest, since it's just a performance optimization. Each of these autocorrect functions should be idempotent, so running them any number of times on a given file should produce no changes after the initial run.

  • Related: this does not (yet) account for new issues that result from fixing other issues (see the AliasOrder test case, which as of this writing is producing a new instance of the very issue it's trying to fix; I scrambled a codebase running this branch @ 0d84490 ^^)

This is correct - at the moment the actual implementations of the autocorrect callbacks are very naive and not good. They're really just there to test out the more general "runner" for this stuff. Once that's all in place and we feel that the "runner" code is ok, then we can devote the time and energy necessary to making the actual implementations of each autocorrect callback really solid. This will of course be the hardest part of all this work, but I see that as a separate problem to what the goal of the PR was, which was to just get the "runner" and other plumbing in place so we can eventually get to these other problems of the callback implementations themselves.

  • It seems odd that the analysis run_ functions receive the check's Params, but the autofix function does not know how the check is configured.

Good point - this is likely something we'll need to expand. I have a feeling we might end up passing the Execution struct all the way through to the autocorrect callbacks themselves, so all the necessary information for things like configuration is there. Can you think of a specific situation where autocorrect might do the wrong thing because it isn't given the check's params, so we can use that as a test case?
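To make the question concrete, here is one hypothetical shape a params-aware callback could take. The `autocorrect/3` arity and the `:trailing_newlines` param are invented for this example; neither appears in the PR.

```elixir
defmodule ParamsAwareFix do
  # Hypothetical: an autocorrect callback that also receives the check's
  # params, mirroring how the analysis run_ functions receive them.
  # The :trailing_newlines param is invented for illustration.
  def autocorrect(file, _issue, params \\ []) do
    keep = Keyword.get(params, :trailing_newlines, 1)
    String.trim_trailing(file, "\n") <> String.duplicate("\n", keep)
  end
end

ParamsAwareFix.autocorrect("foo\n\n\n", nil)
# => "foo\n"
ParamsAwareFix.autocorrect("foo\n", nil, trailing_newlines: 2)
# => "foo\n\n"
```

Without access to the params, the callback would have to fall back to hard-coded defaults that might contradict the user's check configuration.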

Lastly, I would really prefer this to be called autofix (as I alluded to in the original issue).
While I am not a native speaker, "correcting issues" to me sounds like something is objectively wrong with my code and Credo will "correct" it - and "correcting the issues" does not have the same optimistic ring to it as "fixing the issue". ✌️

Totally fine by me! I'll do a quick sed and push this back up.

@rrrene (Owner) commented Aug 3, 2022

I think we are not (yet) getting anywhere in this conversation. :-(

I've written three different answers over the course of a week and none of them seem to fit the irritation I feel regarding your reply.

It seems to me that you haven't really read and/or comprehended the things I was pointing out here and in your original issue.

Instead of talking about the broad strokes we have to make and the guarantees we have to give, which kind of test harness we would need, and what would help autofix authors deliver upward compatibility, you are arguing whether or not an autofix function really needs to know the user-configured params for the check in question.

For the sake of this PR, let's boil this down to a single question:

If this is just a "spike", why isn't this a plugin? Would be ~50 lines of code.

edit: Here's the plugin. It's 53 lines, including white-space for readability.

Other plugins could use CredoAutofixPlugin.register_autofix_fun/2 to register autofix functions for checks and once we have enough decent, reliable autofixers, the authors can then PR that stuff into Credo.

So, why would we need any of this merged into Credo right now?

```elixir
defmodule CredoAutofixPlugin do
  import Credo.Plugin
  alias Credo.CLI.Command.Suggest.SuggestCommand

  def init(exec) do
    exec
    |> register_cli_switch(:autofix, :boolean)
    |> prepend_task(SuggestCommand, :print_after_analysis, __MODULE__.RunAutofix)
  end

  def register_autofix_fun(exec, check_mod, autofix_fun) do
    Credo.Execution.put_assign(exec, ["credo_autofix_plugin.autofix_funs", check_mod], autofix_fun)
  end

  defmodule RunAutofix do
    use Credo.Execution.Task

    def call(exec, _opts, read_fun \\ &File.read!/1, write_fun \\ &File.write!/2) do
      if Execution.get_given_cli_switch(exec, :autofix) do
        exec
        |> Execution.get_issues_grouped_by_filename()
        |> Enum.each(fn {file_path, issues} ->
          file_content = read_fun.(file_path)
          autofixed_content = Enum.reduce(issues, file_content, &run_autofix(&1, &2, exec))
          write_fun.(file_path, autofixed_content)
        end)
      end

      exec
    end

    defp run_autofix(issue, file, exec) do
      autofix_fun = Execution.get_assign(exec, ["credo_autofix_plugin.autofix_funs", issue.check])

      if autofix_fun do
        case autofix_fun.(file, issue) do
          ^file ->
            file

          fixed_file ->
            issues =
              exec
              |> Execution.get_issues()
              |> List.delete(issue)

            Execution.put_issues(exec, issues)
            fixed_file
        end
      else
        file
      end
    end
  end
end
```
