This repository has been archived by the owner on Jul 23, 2019. It is now read-only.

Help wanted: Benchmarks in xray_core #21

Open
nathansobo opened this issue Mar 6, 2018 · 9 comments

Comments

@nathansobo

It would be great to get a PR setting up benchmarks on xray_core in the canonical Rust manner. You can use your imagination on the kinds of operations we want to test, but here are some ideas:

  • Making large numbers of edits
  • Moving large numbers of cursors in a document containing large numbers of edits

Just getting a basic framework in place with some non-trivial benchmark would be helpful.
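
For concreteness, a rough sketch of what such a benchmark might look like in the canonical nightly style, covering both ideas above. The String workload is only a placeholder; a real benchmark would drive xray_core's buffer and cursor APIs.

    // Sketch only: the String workload is a placeholder for xray_core's
    // buffer-editing and cursor-movement operations.
    #![feature(test)]
    extern crate test;

    use test::Bencher;

    #[bench]
    fn edits_then_cursor_motion(b: &mut Bencher) {
        b.iter(|| {
            // "Making large numbers of edits": placeholder insertions.
            let mut text = String::new();
            for _ in 0..10_000 {
                text.push('x');
            }
            // "Moving cursors": placeholder scan standing in for cursor
            // motion through the edited document.
            test::black_box(text.chars().count())
        });
    }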

@max-sixty
Contributor

If anyone wants to get started on this, here's a config for running `cargo bench` on nightly without breaking compilation on stable: max-sixty@bc950a8

With commands

cd xray_core/
cargo +nightly bench --features "dev"

...or reply here if there are other ways to set this up.
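
Roughly, the gating pattern looks like this (names are illustrative; the linked commit has the actual configuration). With a `dev` feature declared in Cargo.toml, plain stable builds never see the nightly-only attribute:

    // In the crate root (lib.rs): only enable the nightly `test` feature
    // when the "dev" feature is turned on, so `cargo build` stays on stable.
    #![cfg_attr(feature = "dev", feature(test))]

    #[cfg(feature = "dev")]
    extern crate test;

    #[cfg(all(test, feature = "dev"))]
    mod benchmarks {
        use test::Bencher;

        #[bench]
        fn example(b: &mut Bencher) {
            b.iter(|| {
                // ...exercise xray_core operations here...
            });
        }
    }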

@cmyr

cmyr commented Mar 10, 2018

Another option for micro benchmarks that is stable/nightly friendly is criterion: https://github.com/japaric/criterion.rs.
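
For reference, a minimal criterion sketch. The workload is a placeholder rather than the real xray_core buffer, and the bench target needs `harness = false` in Cargo.toml:

    #[macro_use]
    extern crate criterion;

    use criterion::Criterion;

    fn sequential_edits(c: &mut Criterion) {
        c.bench_function("10k sequential inserts", |b| {
            b.iter(|| {
                // Placeholder for edits against the real buffer type.
                let mut text = String::new();
                for _ in 0..10_000 {
                    text.push('x');
                }
                text
            })
        });
    }

    criterion_group!(benches, sequential_edits);
    criterion_main!(benches);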

@ysimonson

You can put the benchmarks in a benches directory in the crate root. They'll still run when you call cargo bench, but then you don't have to put them behind a feature flag.
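
A sketch of that layout, assuming the nightly harness (the file name and workload are illustrative). Because bench targets under benches/ are only compiled by `cargo bench`, the nightly-only attribute never affects `cargo build` or `cargo test`:

    // benches/editing.rs -- compiled only when running `cargo bench`.
    #![feature(test)]
    extern crate test;

    use test::Bencher;

    #[bench]
    fn edit_heavy_document(b: &mut Bencher) {
        b.iter(|| {
            // Placeholder workload standing in for calls into xray_core.
            (0..10_000).fold(String::new(), |mut s, _| { s.push('x'); s })
        });
    }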

@pranaygp

pranaygp commented Jun 6, 2018

Can this be closed now that #62 is merged?

@nathansobo
Author

That just laid the groundwork. I think we could still use more benchmarks of core functionality, though.

@anderoonies

anderoonies commented Aug 19, 2018

I'm taking a look at this, but I had a question about how atomic the benchmarks should be.
Right now, the benchmarks exercise individual functions under a single scenario. Should the same functions be tested in multiple scenarios (e.g. selecting to the end of a line without any edits, selecting to the end of a line that has multiple edits, and so on)? And how should benchmarks for the same functions, but with different setups, be organized?

Thanks!
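
One possible way to organize it, assuming the nightly `#[bench]` setup discussed above (a sketch only; `build_document` and `select_to_end_of_line` are simplified stand-ins for the real setup and editor API): share the setup code and give each scenario its own benchmark function, so the name encodes both the operation and the document state.

    use test::Bencher;

    // Hypothetical helpers: the real versions would build an xray_core buffer
    // with `edit_count` prior edits and call the real selection API.
    fn build_document(edit_count: usize) -> String {
        "line of text\n".repeat(edit_count.max(1))
    }

    fn select_to_end_of_line(doc: &str) -> usize {
        doc.find('\n').unwrap_or(doc.len())
    }

    #[bench]
    fn select_to_end_of_line_no_edits(b: &mut Bencher) {
        let doc = build_document(0);
        b.iter(|| select_to_end_of_line(&doc));
    }

    #[bench]
    fn select_to_end_of_line_many_edits(b: &mut Bencher) {
        let doc = build_document(10_000);
        b.iter(|| select_to_end_of_line(&doc));
    }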

@nathansobo
Author

@anderoonies I honestly haven't developed strong opinions on this. My initial thought is that we'd want to focus on situations involving more complexity. If we optimize those, presumably we'd do well in simpler cases; and conversely, if we're fast on a line without edits but slow on a line with lots of edits, it still seems like we'd be too slow overall. That said, each scenario might be different. I have a lot of experience optimizing, but not much experience designing a long-lived benchmark suite, so I'd be happy to hear perspectives on the right design considerations.

@anderoonies

I'm curious to hear others' experience and input as well, being new to writing benchmarks myself.
The existing benchmarks @rleungx added establish a pattern of testing individual functions of the editor API under single, fairly "intense" scenarios; I'm happy to extend that to cover the rest of the core API.
@nathansobo, as someone very familiar with the underlying implementation, are there any behaviors of the editor you feel benchmarking should focus on? In the RGASplit paper, Briot et al. use "randomly generated traces" for performance evaluation, but I'm not sure what the consensus is on randomness in benchmarking.

@nathansobo
Author

I'm not sure random edits are as important as sequential edits that simulate what a human would do. But testing against documents containing lots of edits will likely be important.
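
For whoever picks this up, a small sketch of what a sequential, typing-like trace generator might look like; the `(range, text)` representation is an assumption, not xray_core's actual edit type.

    use std::ops::Range;

    // Generate a sequential, typing-like edit trace (insert one character
    // after another) to replay against a document in the bench body, rather
    // than sampling random positions.
    fn typing_trace(chars: usize) -> Vec<(Range<usize>, &'static str)> {
        (0..chars).map(|i| (i..i, "x")).collect()
    }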
