
on keeping all the reducing functions on the store module (Part II) #34

Open
gabrielmontagne opened this issue Jun 14, 2019 · 8 comments

Comments

@gabrielmontagne
Contributor

@cristiano-belloni, man (but spamming the friends as well: @ollyhayes, @mamapitufo), still in the spirit of #16, and following up on the conversations / complications we were having today,

Streams that don't emit reducers but only their raw material are the most useful. We can compose them into other streams, keeping the tree of streams untangled.

But once we need to control flows and orchestrate processes, we start to build up streams that emit other stuff... so we "homogenize and pasteurize" by converting the raw material into reducing functions---I still feel weird calling them reducers.

Classic example: loaders: when we load data we like to show a preloader "Loading wonderful stuff... hang on", load the data, and then remove the preloader.
We normally emit a reducer to show the loader, load the data but, instead of emitting it raw, emit a reducer that will stow it on the state, and then emit another reducer to clear the loader.

This means we cannot really use the data raw as input to other streams.
Or we can, if we pluck it out, but then we risk that the time the data was actually fetched was not the time it was sandwiched between the preloaders.

So what if we could separate the streams: have a stream that just emits data, and a stream that just emits "preloaders", but have them synchronized through higher order observable magic?
A bit like what we've done with location: we can, and do, subscribe to the location$ stream, not only to drive the routes but also analytics, etc., and we can still drive it from a composing stream that interweaves actions.

To go back to our loading example, what if instead of doing,

const initialData$ = concat(
  createLoading$('Loading initial data'),       // emits loader reducer
  getCusterDetail$.pipe(
    map(fancyTranforming),
    stow('userDetails')
  ),                                            // emits data reducer
  createClearLoading$('Loading initial data')   // emits loader reducer
)

... we could do instead,

const initialData$ = withLoader$(
  getCusterDetail$.pipe(map(fancyTranforming)),
  'Loading initial data'
) // emits the real data, but a preloader gets magically emitted on a sibling
  // stream

It's shorter but, most importantly, I'd emit the fffing data, so it can be plugged into other stuff.
And the knowledge of where the data lands in the store would live with stow on the store$ module definition.

This can be implemented quite cleanly (at least the when-do-I-start-and-when-do-I-end bit) using a simplish Observable.create construction. For example, and very roughly,

function withLoader$(source$, label) {
  return Observable.create(
    o => {
      emitLoaderStuff$.next(createLoading$(label))  // here we were subscribed to, so the process starts
      const subscription = source$.subscribe(o)
      return () => {
        subscription.unsubscribe()                  // make sure the source is torn down too
        emitLoaderStuff$.next(createClearLoading$(label))  // runs after the source completes or we're unsubscribed from
      }
    }
  )
}

... and so on and so forth?
For loaders, we can nest them and control all the stacking on the new loader$ as well, simplifying the view quite a lot. No need to understand preloader stacks / lasagna in the toView function.
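A rough sketch of what those stacking reducing functions could look like on the store module side. All names here (showLoader, clearLoader, currentLoaderLabel, the state.loaders key) are made up for illustration, not existing API:

```javascript
// Hedged sketch: the reducing functions a system loader$ could emit, assuming
// the store keeps the stack as a plain array under state.loaders.
const showLoader = label => state => ({
  ...state,
  loaders: [...(state.loaders || []), label],
})

const clearLoader = label => state => {
  const loaders = [...(state.loaders || [])]
  const i = loaders.lastIndexOf(label)   // pop the most recent matching label
  if (i !== -1) loaders.splice(i, 1)
  return { ...state, loaders }
}

// The view only ever asks for the top of the stack, no lasagna:
const currentLoaderLabel = state =>
  state.loaders && state.loaders.length
    ? state.loaders[state.loaders.length - 1]
    : null
```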

Because, man, remember what that other Great Italian said,

Life is a combination of magic and pasta. — Fellini

@cristiano-belloni
Collaborator

cristiano-belloni commented Jun 15, 2019

That's good! Just some other details on our conversation yesterday:

  • There are pipelines that process data (taking it from somewhere, i.e. a server, and mapping, filtering, combining it) and pipelines that "create" information according to their own logic. Two examples of the latter are location (which is already handled specially) and loaders, and maybe modals. These two kinds of pipeline are handled in the same streams at the same time, and the second kind, which has no intermediate data, "ruins" the stream when we try to reuse it.

  • One of the ways of trying to escape the previous situation is listening to a stream and next()ing to a Subject as a side effect, to kickstart a separate pipeline. We don't like it because it makes the code difficult to read: our "tree of life" is easy to read because streams listen to their antecedents and know nothing of their descendants, a mirrored version of what React does (know only about your children), where the imaginary mirror line lies in the store, which is the nexus between the business logic layer and the view layer. In general, we like tree-like structures and we dislike graph-like structures, because you can always read trees top-down, but when you read a graph you have to jump around.

  • It was proposed to have two "channels" in the streams, passing down the data and an eventual reducer at the same time. This way, streams that emit reducing functions would also emit data and could always be listened to. We decided not to do that because it makes everything clunky and would need a lot of glue code to artificially sustain the paradigm.

  • We proposed, to solve the problem at hand (loaders), to treat them "specially", like we do with location: a separate "system" stream that emits show loader / clear loader reducing functions. The nature of loaders being a symmetrical stack, it's easy and functional to wrap a journey in a withLoader function that wraps a stream and emits on the "system" loader stream on subscription (show loader) and after completion (clear loader).

  • It's still not clear if we want to support a way to create generic "system" streams in caballo-vivo, or if we just want to provide opinionated system streams (like loader$) whenever the need arises, or both.

@gabrielmontagne
Contributor Author

The more I think about this, the more comfortable I feel with this approach.

When introducing new people to the team, we always insist on how the homogeneous API of these observables is a big plus, and how, once we "give a name" to a potentially complex (or, in this case, potentially "magical") construction, we forget about its innards and just use it as simply as we'd use an of('A') observable.

If anything, I think the higher order observable could be more idiomatically expressed as an operator; something that looks more like,

const initialData$ = ajax('https://jsonplaceholder.typicode.com/posts/1').pipe(withPreloader('Loading article eins'))

But I'd be curious to see how this withLoader / loader$ would look if it, on its own, managed the stacking, so that on the view we'd only get the current string, not the stack, size, etc. noise.

Exercise for the reader, etc. but could be useful and cool. Tho not necessarily in that order.
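Taking a stab at that exercise: a very rough sketch of the fold a stack-managing loader$ could run internally (a scan over show / clear events, in RxJS terms), written here as a plain reduce so it stands alone. The event shapes and the foldLoaderEvents name are assumptions, not caballo-vivo API:

```javascript
// Fold a sequence of { type: 'show' | 'clear', label } events into the single
// string the view should display (or null when nothing is loading).
function foldLoaderEvents(events) {
  const stack = events.reduce((acc, e) => {
    if (e.type === 'show') return [...acc, e.label]
    const i = acc.lastIndexOf(e.label)   // clear removes the most recent match
    return i === -1 ? acc : [...acc.slice(0, i), ...acc.slice(i + 1)]
  }, [])
  // The view gets one current label, never the stack / size / etc. noise.
  return stack.length ? stack[stack.length - 1] : null
}
```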

@gabrielmontagne
Contributor Author

Same thing could be applied to flogs. They could signal or wrap the start of the observable's life, perhaps even grouped on the console in a nicely %c-colored series. D3's categorical 20 or whatever, iykwim.
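For the colored-series idea, a minimal sketch: each label gets the next colour from a small fixed palette (a poor man's d3 categorical scheme). flogStyle and the palette are made-up names, not an existing API:

```javascript
// Cycle a small categorical palette so each flogged stream gets its own hue.
const palette = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd']
let nextColourIndex = 0

function flogStyle(label) {
  const colour = palette[nextColourIndex++ % palette.length]
  // In a browser console you'd spread this into console.log:
  //   console.log(...flogStyle('myStream$'))
  return [`%c${label}`, `color: ${colour}; font-weight: bold`]
}
```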

@cristiano-belloni
Collaborator

cristiano-belloni commented Jun 17, 2019

Trying my hand at over-generalising this. Not sure it's a good idea, but if we want to apply it to flogs* and to modals:

function withWrapper$(source$, onBefore, onAfter) {
  return Observable.create(
    o => {
      onBefore()
      const subscription = source$.subscribe(o)
      return () => {
        subscription.unsubscribe()
        onAfter()
      }
    }
  )
}

*We all know it's just an excuse to use the %c trick institutionally (btw, does it work on FF? EDIT: it seems a living standard, lol)

@cristiano-belloni
Collaborator

Or even, pipeable (I'm just sketching in comments, so it might not even be parseable code):

function withWrapper$ (onBefore, onAfter) {
  return function (source$) {
    return Observable.create(
      o => {
        onBefore()
        const subscription = source$.subscribe(o)
        return () => {
          subscription.unsubscribe()
          onAfter()
        }
      }
    )
  }
}
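The subscribe-time / teardown-time trick can be exercised without RxJS at all. Here's a runnable sketch using a bare "subscribe function returning a teardown" as a stand-in for Observable.create; all names are made up, and, unlike real RxJS, the teardown here only runs when we unsubscribe explicitly:

```javascript
// Generic wrapper: run onBefore when subscribed, onAfter when torn down.
function withWrapper(onBefore, onAfter) {
  return subscribeFn => observer => {
    onBefore()                       // side effect on subscription
    const teardown = subscribeFn(observer)
    return () => {
      if (teardown) teardown()       // tear down the inner source first
      onAfter()                      // side effect on unsubscription
    }
  }
}

// A toy source that synchronously emits one value and completes:
const sourceOfHello = observer => {
  observer.next('hello')
  observer.complete()
  return () => {}
}

const events = []
const wrapped = withWrapper(
  () => events.push('show loader'),
  () => events.push('clear loader')
)(sourceOfHello)

const unsubscribe = wrapped({
  next: v => events.push(`data: ${v}`),
  complete: () => events.push('complete'),
})
unsubscribe()
// events: ['show loader', 'data: hello', 'complete', 'clear loader']
```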

@cristiano-belloni
Collaborator

... I mean, exporting the general method and the specialised methods we believe are useful in cv? So that anyone could create their own wrappers?

@gabrielmontagne
Contributor Author

Why not :-) I like the pipeable version. It's almost short enough not to be needed, but only almost, not quite. Why not have it in? And, yes, we can sketch the two or three things that can use it and expose them as well.

@cristiano-belloni
Collaborator

cristiano-belloni commented Jun 17, 2019

Why not have it in?

Because I can't clone this repo on my workstation 🤣

(I / we can start a PR later or tomorrow)
