
Commit

Merge pull request #1112 from JuliaAI/dev
Force documentation updates. No new release.
ablaom authored May 6, 2024
2 parents 9bdb4ed + ec5af95 commit 1a1d10f
Showing 26 changed files with 928 additions and 821 deletions.
2 changes: 1 addition & 1 deletion CODE_OF_CONDUCT.md
@@ -18,7 +18,7 @@ below. The Committee will respond to reports on a case-by-case basis.
following forums:

- the GitHub repository
-  [MLJ.jl](https://github.com/alan-turing-institute/MLJ.jl) (including
+  [MLJ.jl](https://github.com/JuliaAI/MLJ.jl) (including
all issues, discussions, and pull requests)

- all GitHub repositories in the [JuliaAI](https://github.com/JuliaAI)
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -9,7 +9,7 @@ channel](https://julialang.org/slack/), #mlj.
- [Code organization](ORGANIZATION.md)

- Issues: Currently issues are split between [MLJ
-  issues](https://github.com/alan-turing-institute/MLJ.jl/issues) and
+  issues](https://github.com/JuliaAI/MLJ.jl/issues) and
issues in all other repositories, collected in [this GitHub
Project](https://github.com/orgs/JuliaAI/projects/1).

@@ -43,7 +43,7 @@ an internal state reflecting the outcomes of applying `fit!` and
A generalization of machine, called a *nodal* machine, is a key
element of *learning networks* which combine several models together,
and form the basis for specifying new composite model types. See
-[here](https://alan-turing-institute.github.io/MLJ.jl/dev/composing_models/)
+[here](https://JuliaAI.github.io/MLJ.jl/dev/composing_models/)
for more on these.

MLJ code is now spread over [multiple repositories](ORGANIZATION.md).
6 changes: 3 additions & 3 deletions ORGANIZATION.md
@@ -8,7 +8,7 @@ connections do not currently exist but are planned/proposed.*
Repositories of some possible interest outside of MLJ, or beyond
its conventional use, are marked with a ⟂ symbol:

-* [MLJ.jl](https://github.com/alan-turing-institute/MLJ.jl) is the
+* [MLJ.jl](https://github.com/JuliaAI/MLJ.jl) is the
general user's point-of-entry for choosing, loading, composing,
evaluating and tuning machine learning models. It pulls in most code
from other repositories described below. MLJ also hosts the [MLJ
@@ -100,9 +100,9 @@ its conventional use, are marked with a ⟂ symbol:
models and measures (metrics).

* (⟂)
-  [DataScienceTutorials](https://github.com/alan-turing-institute/DataScienceTutorials.jl)
+  [DataScienceTutorials](https://github.com/JuliaAI/DataScienceTutorials.jl)
collects tutorials on how to use MLJ, which are deployed
-  [here](https://alan-turing-institute.github.io/DataScienceTutorials.jl/)
+  [here](https://JuliaAI.github.io/DataScienceTutorials.jl/)

* [MLJTestIntegration](https://github.com/JuliaAI/MLJTestIntegration.jl)
provides tests for implementations of the MLJ model interface, and
12 changes: 6 additions & 6 deletions README.md
@@ -4,11 +4,11 @@

<h2 align="center">A Machine Learning Framework for Julia
<p align="center">
-<a href="https://github.com/alan-turing-institute/MLJ.jl/actions">
-<img src="https://github.com/alan-turing-institute/MLJ.jl/workflows/CI/badge.svg"
+<a href="https://github.com/JuliaAI/MLJ.jl/actions">
+<img src="https://github.com/JuliaAI/MLJ.jl/workflows/CI/badge.svg"
alt="Build Status">
</a>
-<a href="https://alan-turing-institute.github.io/MLJ.jl/dev/">
+<a href="https://JuliaAI.github.io/MLJ.jl/dev/">
<img src="https://img.shields.io/badge/docs-stable-blue.svg"
alt="Documentation">
</a>
@@ -28,13 +28,13 @@
MLJ (Machine Learning in Julia) is a toolbox written in Julia
providing a common interface and meta-algorithms for selecting,
tuning, evaluating, composing and comparing about [200 machine learning
-models](https://alan-turing-institute.github.io/MLJ.jl/dev/model_browser/#Model-Browser)
+models](https://JuliaAI.github.io/MLJ.jl/dev/model_browser/#Model-Browser)
written in Julia and other languages.

-**New to MLJ?** Start [here](https://alan-turing-institute.github.io/MLJ.jl/dev/).
+**New to MLJ?** Start [here](https://JuliaAI.github.io/MLJ.jl/dev/).

**Integrating an existing machine learning model into the MLJ
-framework?** Start [here](https://alan-turing-institute.github.io/MLJ.jl/dev/quick_start_guide_to_adding_models/).
+framework?** Start [here](https://JuliaAI.github.io/MLJ.jl/dev/quick_start_guide_to_adding_models/).

**Wanting to contribute?** Start [here](CONTRIBUTING.md).

46 changes: 23 additions & 23 deletions ROADMAP.md
@@ -45,7 +45,7 @@ GH Project](https://github.com/orgs/JuliaAI/projects/1).
- [ ] **Integrate deep learning** using [Flux.jl](https://github.com/FluxML/Flux.jl.git) deep learning. [Done](https://github.com/FluxML/MLJFlux.jl) but can
improve the experience by:

-  - [x] finishing iterative model wrapper [#139](https://github.com/alan-turing-institute/MLJ.jl/issues/139)
+  - [x] finishing iterative model wrapper [#139](https://github.com/JuliaAI/MLJ.jl/issues/139)

- [ ] improving performance by implementing data front-end after (see [MLJBase
#501](https://github.com/JuliaAI/MLJBase.jl/pull/501)) but see also [this relevant discussion](https://github.com/FluxML/MLJFlux.jl/issues/97).
@@ -55,7 +55,7 @@ GH Project](https://github.com/orgs/JuliaAI/projects/1).
[Turing.jl](https://github.com/TuringLang/Turing.jl),
[Gen](https://github.com/probcomp/Gen),
[Soss.jl](https://github.com/cscherrer/Soss.jl.git)
-  [#157](https://github.com/alan-turing-institute/MLJ.jl/issues/157)
+  [#157](https://github.com/JuliaAI/MLJ.jl/issues/157)
[discourse
thread](https://discourse.julialang.org/t/ppl-connection-to-mlj-jl/28736)
[done](https://github.com/tlienart/SossMLJ.jl) but experimental and
@@ -65,21 +65,21 @@ GH Project](https://github.com/orgs/JuliaAI/projects/1).
"distributions" that can only be sampled.

- [ ] Feature engineering (python featuretools?, recursive feature
-  elimination?)
-  [#426](https://github.com/alan-turing-institute/MLJ.jl/issues/426) [MLJModels #314](https://github.com/JuliaAI/MLJModels.jl/issues/314)
+  elimination ✓ done in FeatureSelection.jl :)
+  [#426](https://github.com/JuliaAI/MLJ.jl/issues/426) [MLJModels #314](https://github.com/JuliaAI/MLJModels.jl/issues/314)


### Enhancing core functionality

-- [x] Iterative model control [#139](https://github.com/alan-turing-institute/MLJ.jl/issues/139). [Done](https://github.com/JuliaAI/MLJIteration.jl)
+- [x] Iterative model control [#139](https://github.com/JuliaAI/MLJ.jl/issues/139). [Done](https://github.com/JuliaAI/MLJIteration.jl)

- [ ] **** Add more tuning
strategies. See [here](https://github.com/JuliaAI/MLJTuning.jl#what-is-provided-here)
for complete
wish-list. Particular focus on:

- [x] random search
-    ([#37](https://github.com/alan-turing-institute/MLJ.jl/issues/37))
+    ([#37](https://github.com/JuliaAI/MLJ.jl/issues/37))
(done)

- [x] Latin hypercube
@@ -88,37 +88,37 @@ GH Project](https://github.com/orgs/JuliaAI/projects/1).
- [ ] Bayesian methods, starting with Gaussian Process methods a
la PyMC3. Some preliminary research done.

-  - [ ] POC for AD-powered gradient descent [#74](https://github.com/alan-turing-institute/MLJ.jl/issues/74)
+  - [ ] POC for AD-powered gradient descent [#74](https://github.com/JuliaAI/MLJ.jl/issues/74)

- [ ] Tuning with adaptive resource allocation, as in
Hyperband. This might be implemented elegantly with the help of
the recent `IterativeModel` wrapper, which applies, in
particular to `TunedModel` instances [see
-    here](https://alan-turing-institute.github.io/MLJ.jl/dev/controlling_iterative_models/#Using-training-losses,-and-controlling-model-tuning).
+    here](https://JuliaAI.github.io/MLJ.jl/dev/controlling_iterative_models/#Using-training-losses,-and-controlling-model-tuning).

- [ ] Genetic algorithms
-    [#38](https://github.com/alan-turing-institute/MLJ.jl/issues/38)
+    [#38](https://github.com/JuliaAI/MLJ.jl/issues/38)

- [ ] Particle Swarm Optimization (current WIP, GSoC project @lhnguyen-vn)

- [ ] tuning strategies for non-Cartesian spaces of models [MLJTuning
#18](https://github.com/JuliaAI/MLJTuning.jl/issues/18), architecture search, and other AutoML workflows

- [ ] Systematic benchmarking, probably modeled on
-  [MLaut](https://arxiv.org/abs/1901.03678) [#69](https://github.com/alan-turing-institute/MLJ.jl/issues/74)
+  [MLaut](https://arxiv.org/abs/1901.03678) [#69](https://github.com/JuliaAI/MLJ.jl/issues/74)

- [ ] Give `EnsembleModel` a more extendible API and extend beyond bagging
(boosting, etc) and migrate to a separate repository?
-  [#363](https://github.com/alan-turing-institute/MLJ.jl/issues/363)
+  [#363](https://github.com/JuliaAI/MLJ.jl/issues/363)

- [ ] **** Enhance complex model composition:

- [x] Introduce a canned
-    stacking model wrapper ([POC](https://alan-turing-institute.github.io/DataScienceTutorials.jl/getting-started/stacking/)). WIP @olivierlabayle
+    stacking model wrapper ([POC](https://JuliaAI.github.io/DataScienceTutorials.jl/getting-started/stacking/)). WIP @olivierlabayle

- [ ] Get rid of macros for creating pipelines and possibly
implement target transforms as wrappers ([MLJBase
-    #594](https://github.com/alan-turing-institute/MLJ.jl/issues/594))
+    #594](https://github.com/JuliaAI/MLJ.jl/issues/594))
WIP @CameronBieganek and @ablaom


@@ -137,7 +137,7 @@ GH Project](https://github.com/orgs/JuliaAI/projects/1).
[this
proposal](https://julialang.org/jsoc/gsoc/MLJ/#interpretable_machine_learning_in_julia)

-- [ ] Spin-off a stand-alone measures (loss functions) package
+- [x] Spin-off a stand-alone measures (loss functions) package
(currently
[here](https://github.com/JuliaAI/MLJBase.jl/tree/master/src/measures)). Introduce
measures for multi-targets [MLJBase
@@ -147,11 +147,11 @@ GH Project](https://github.com/orgs/JuliaAI/projects/1).
could use [NaiveBayes.jl](https://github.com/dfdx/NaiveBayes.jl)
as a POC (currently wrapped only for dense input) but the API
needs to be finalized first
-  {#731](https://github.com/alan-turing-institute/MLJ.jl/issues/731). Probably
+  [#731](https://github.com/JuliaAI/MLJ.jl/issues/731). Probably
need a new SparseTables.jl package.

- [x] POC for implementation of time series models classification
-  [#303](https://github.com/alan-turing-institute/MLJ.jl/issues/303),
+  [#303](https://github.com/JuliaAI/MLJ.jl/issues/303),
[ScientificTypesBase #14](https://github.com/JuliaAI/ScientificTypesBase.jl/issues/14) POC is [here](https://github.com/JuliaAI/TimeSeriesClassification.jl)

- [ ] POC for time series forecasting, along lines of sktime; probably needs [MLJBase
@@ -162,16 +162,16 @@ GH Project](https://github.com/orgs/JuliaAI/projects/1).
- [ ] Add tools or a separate repository for visualization in MLJ.

- [x] Extend visualization of tuning plots beyond two-parameters
-    [#85](https://github.com/alan-turing-institute/MLJ.jl/issues/85)
+    [#85](https://github.com/JuliaAI/MLJ.jl/issues/85)
(closed).
-    [#416](https://github.com/alan-turing-institute/MLJ.jl/issues/416)
+    [#416](https://github.com/JuliaAI/MLJ.jl/issues/416)
[Done](https://github.com/JuliaAI/MLJTuning.jl/pull/121) but might be worth adding alternatives suggested in issue.

-  - [ ] visualizing decision boundaries? [#342](https://github.com/alan-turing-institute/MLJ.jl/issues/342)
+  - [ ] visualizing decision boundaries? [#342](https://github.com/JuliaAI/MLJ.jl/issues/342)

- [ ] provide visualizations that MLR3 provides via [mlr3viz](https://github.com/mlr-org/mlr3viz)

-- [ ] Extend API to accommodate outlier detection, as provided by [OutlierDetection.jl](https://github.com/davnn/OutlierDetection.jl) [#780](https://github.com/alan-turing-institute/MLJ.jl/issues/780) WIP @davn and @ablaom
+- [ ] Extend API to accommodate outlier detection, as provided by [OutlierDetection.jl](https://github.com/davnn/OutlierDetection.jl) [#780](https://github.com/JuliaAI/MLJ.jl/issues/780) WIP @davn and @ablaom

- [ ] Add more pre-processing tools:

@@ -194,14 +194,14 @@ GH Project](https://github.com/orgs/JuliaAI/projects/1).
is merged.

- [ ] Online learning support and distributed data
-  [#60](https://github.com/alan-turing-institute/MLJ.jl/issues/60)
+  [#60](https://github.com/JuliaAI/MLJ.jl/issues/60)

- [ ] DAG scheduling for learning network training
-  [#72](https://github.com/alan-turing-institute/MLJ.jl/issues/72)
+  [#72](https://github.com/JuliaAI/MLJ.jl/issues/72)
(multithreading first?)

- [ ] Automated estimates of cpu/memory requirements
-  [#71](https://github.com/alan-turing-institute/MLJ.jl/issues/71)
+  [#71](https://github.com/JuliaAI/MLJ.jl/issues/71)

- [x] Add multithreading to tuning [MLJTuning
#15](https://github.com/JuliaAI/MLJTuning.jl/issues/15)
2 changes: 1 addition & 1 deletion docs/make.jl
@@ -126,7 +126,7 @@ makedocs(
@info "`makedocs` has finished running. "

deploydocs(
-    repo = "github.com/alan-turing-institute/MLJ.jl",
+    repo = "github.com/JuliaAI/MLJ.jl",
devbranch="master",
push_preview=false,
)
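For context, the `deploydocs` call patched above is the standard Documenter.jl deployment step. A minimal `docs/make.jl` pairing it with `makedocs` might look like the following sketch; the `repo`, `devbranch`, and `push_preview` values come from this diff, while the `sitename` and `pages` entries are illustrative assumptions, not taken from the actual file:

```julia
# docs/make.jl -- minimal Documenter.jl build script (sketch)
using Documenter
using MLJ

# Build the HTML docs; sitename and pages here are illustrative assumptions.
makedocs(
    modules = [MLJ],
    sitename = "MLJ",
    pages = ["Home" => "index.md"],
)

# Deploy the built docs, mirroring the repo URL updated in this commit.
deploydocs(
    repo = "github.com/JuliaAI/MLJ.jl",
    devbranch = "master",
    push_preview = false,
)
```

Because `deploydocs` derives the gh-pages push target solely from `repo`, updating this one string is all that is needed to redirect documentation deployment after the repository transfer.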
10 changes: 5 additions & 5 deletions docs/src/about_mlj.md
@@ -15,7 +15,7 @@ MLJ is released under the MIT license.

A self-contained notebook and julia script of this demonstration is
also available
-[here](https://github.com/alan-turing-institute/MLJ.jl/tree/dev/examples/lightning_tour).
+[here](https://github.com/JuliaAI/MLJ.jl/tree/dev/examples/lightning_tour).

The first code snippet below creates a new Julia environment
`MLJ_tour` and installs just those packages needed for the tour. See
@@ -210,13 +210,13 @@ and to report issues.

For a query to have maximum exposure to maintainers and users, start a
discussion thread at [Julia Discourse Machine
-Learning](https://github.com/alan-turing-institute/MLJ.jl) and
+Learning](https://github.com/JuliaAI/MLJ.jl) and
your issue "mlj". Queries can also be posted as
-[issues](https://github.com/alan-turing-institute/MLJ.jl/issues), or
+[issues](https://github.com/JuliaAI/MLJ.jl/issues), or
on the `#mlj` slack workspace in the Julia Slack channel.

Bugs, suggestions, and feature requests can be posted
-[here](https://github.com/alan-turing-institute/MLJ.jl/issues).
+[here](https://github.com/JuliaAI/MLJ.jl/issues).

Users are also welcome to join the `#mlj` Julia slack channel to ask
questions and make suggestions.
@@ -269,7 +269,7 @@ packages such as DecisionTree.jl, ScikitLearn.jl or XGBoost.jl.
MLJ is supported by several satellite packages (MLJTuning,
MLJModelInterface, etc) which the general user is *not* required to
install directly. Developers can learn more about these
-[here](https://github.com/alan-turing-institute/MLJ.jl/blob/master/ORGANIZATION.md).
+[here](https://github.com/JuliaAI/MLJ.jl/blob/master/ORGANIZATION.md).

See also the alternative installation instructions for [Modifying Behavior](@ref).

2 changes: 1 addition & 1 deletion docs/src/benchmarking.md
@@ -2,4 +2,4 @@

This feature not yet available.

-[CONTRIBUTE.md](https://github.com/alan-turing-institute/MLJ.jl/blob/master/CONTRIBUTE.md)
+[CONTRIBUTE.md](https://github.com/JuliaAI/MLJ.jl/blob/master/CONTRIBUTE.md)