
Hack the Dinos 2015

Overview

A one-day, cooperative event to build a solution to one of several listed challenges using provided data sets.

https://github.com/amnh/HacktheDinos
https://github.com/amnh/HacktheDinos/wiki

Challenge

The Backyard Expedition app seems interesting: https://github.com/amnh/HacktheDinos/wiki/Backyard-Expedition-App

“The two primary things fossil experts look for are: (1) where the fossils are from (not necessarily where the photo was taken / geotagged); and (2) a well focused image. Having a scale in the photo is good, but is not a requirement.”

“Sample text for users: ‘Thanks for submitting your specimen! We’d be happy to have a look at your find. Please take very sharp photos (no more than 5 photos that are no larger than 500k each) that show scale (place a ruler or coin beside the specimen in the photo). Please note that we do not perform appraisals on specimens and cannot tell you the value of a fossil.’”
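
Those limits (at most five photos, each under 500 KB) are easy to enforce client-side before upload. A minimal sketch in modern Swift, assuming a UIKit app; the function name and the stepped-quality loop are just illustrative:

    import UIKit

    // Lower JPEG quality until the encoded photo fits the ~500 KB limit
    // quoted above. Returns nil only if encoding fails outright.
    func submissionData(for photo: UIImage, maxBytes: Int = 500_000) -> Data? {
        var quality: CGFloat = 0.9
        while quality >= 0.1 {
            if let data = photo.jpegData(compressionQuality: quality),
               data.count <= maxBytes {
                return data
            }
            quality -= 0.1
        }
        // Give up and return the smallest encoding we can produce.
        return photo.jpegData(compressionQuality: 0.1)
    }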

Solutions

From the wiki:

  • Build a mobile app that allows an individual to take a photo, geotag it, describe what they found and how they found it, easily enter other relevant data, and submit to the museum. (Don’t forget to tell users to include a ruler for scale)
  • Build a web app system that allows the museum to examine, curate, and feature submissions via a web page (“Backyard Find Of The Week”?) and add some text to explain why it is or isn’t a fossil.
  • Use sentiment analysis on language of a message from an individual to determine whether or not a fossil could be real?
  • Image recognition to compare actual fossils to non-fossils?
  • Use other factors to weight the possibility of an actual fossil: latitude / longitude (certain areas are more likely to have fossils), et cetera

Notes

It should be relatively straightforward to build a mobile app that uses the built-in camera/photo library, and likewise to create a form for the desired metadata.
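
For reference, a minimal sketch of the camera/library piece using UIKit's UIImagePickerController (modern Swift; the view controller and method names are hypothetical):

    import UIKit

    final class CaptureViewController: UIViewController,
        UIImagePickerControllerDelegate, UINavigationControllerDelegate {

        // Present the built-in camera, falling back to the photo library
        // (the simulator has no camera).
        func pickPhoto() {
            let picker = UIImagePickerController()
            picker.sourceType = UIImagePickerController
                .isSourceTypeAvailable(.camera) ? .camera : .photoLibrary
            picker.delegate = self
            present(picker, animated: true)
        }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            picker.dismiss(animated: true)
            guard let photo = info[.originalImage] as? UIImage else { return }
            // Hand the photo to the focus check and metadata form below.
            _ = photo
        }
    }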

Focus detection is more difficult, but could be automated. tl;dr: a naive implementation of a Laplacian convolution kernel using Core Image's CIConvolution3X3 filter (CIFilter):

     0  1  0
     1 -4  1
     0  1  0

#import <CoreImage/CoreImage.h>

// Gain factor; raise g to amplify the edge response.
const double g = 1.;
// 3x3 Laplacian weights (sign-flipped relative to the kernel above;
// the magnitude of the response is the same).
const CGFloat weights[] =
    {  0,   -1*g,  0,
      -1*g,  4*g, -1*g,
       0,   -1*g,  0 };
CIImage *outputImage = [CIFilter filterWithName:@"CIConvolution3X3"
                                  keysAndValues:
    @"inputImage", inputImage,
    @"inputWeights", [CIVector vectorWithValues:weights count:9],
    @"inputBias", @0,
    nil].outputImage;

Then take the maximum (or a near-maximum) of the filtered image as a sharpness score: crisp edges produce strong Laplacian responses, while a blurry photo yields only weak ones.
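
One way to get that maximum without reading every pixel on the CPU is Core Image's CIAreaMaximum reduction filter, which collapses the image extent to a single pixel. A sketch in modern Swift; the helper name and the 0–1 normalization are just illustrative:

    import CoreImage

    // Reduce the Laplacian-filtered image to one sharpness score by taking
    // the maximum pixel value over its extent with CIAreaMaximum.
    func sharpnessScore(of edges: CIImage, context: CIContext) -> Double {
        guard let filter = CIFilter(name: "CIAreaMaximum", parameters: [
            kCIInputImageKey: edges,
            kCIInputExtentKey: CIVector(cgRect: edges.extent)
        ]), let output = filter.outputImage else { return 0 }

        // Render the 1x1 result and read back a single RGBA pixel.
        var pixel = [UInt8](repeating: 0, count: 4)
        context.render(output, toBitmap: &pixel, rowBytes: 4,
                       bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                       format: .RGBA8, colorSpace: nil)
        // A higher max edge response means a sharper image; the pass/fail
        // threshold would have to be tuned empirically.
        return Double(pixel[0]) / 255.0
    }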

http://stackoverflow.com/questions/7765810/is-there-a-way-to-detect-if-an-image-is-blurry

https://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CIConvolution3X3

Destination

Where should the app submit the data? The current system is an e-mail address; it sounds like they want a backend/CMS that submissions populate so staff can manage them via a web interface.
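
Until that backend exists, the client side is a plain HTTP POST. A minimal sketch with URLSession; the endpoint URL and payload shape are hypothetical:

    import Foundation

    // Hypothetical endpoint; the real destination is TBD (currently e-mail).
    let endpoint = URL(string: "https://example.org/api/submissions")!

    func submit(photoData: Data, metadata: [String: String]) {
        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")

        // Illustrative payload: base64 photo plus the form metadata.
        let payload: [String: Any] = [
            "photo": photoData.base64EncodedString(),
            "metadata": metadata
        ]
        request.httpBody = try? JSONSerialization.data(withJSONObject: payload)

        URLSession.shared.dataTask(with: request) { _, response, error in
            // Real code would surface errors to the user and retry.
            print("submitted:", response ?? "no response", error ?? "no error")
        }.resume()
    }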

Libraries

Potentially useful libraries.

Surge

Maybe a bit dated, but Surge could ease access to the Accelerate framework for vector math: https://github.com/mattt/Surge

Could be nice for computing statistics (sum, mean) over the processed image data.
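
A sketch of what that could look like, assuming the filtered image's pixel intensities have already been pulled out into a plain [Double] (Surge's top-level sum/mean/max functions wrap vDSP):

    import Surge

    // Pixel intensities from the Laplacian-filtered image, flattened to
    // a plain array (bitmap extraction is omitted here).
    let intensities: [Double] = [0.1, 0.8, 0.3, 0.9, 0.2]

    let total = Surge.sum(intensities)     // vDSP-backed sum
    let average = Surge.mean(intensities)  // vDSP-backed mean
    let peak = Surge.max(intensities)      // candidate sharpness score
    print(total, average, peak)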

Interstellar

Nice, lightweight FRP library. Signals! https://github.com/JensRavens/Interstellar
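
For example, captured photos could flow through a Signal into the blur check. A sketch following Interstellar's README API (Signal, map, next, update); the scoring stub stands in for the Core Image code above:

    import UIKit
    import Interstellar

    // Stub standing in for the Core Image sharpness scoring sketched above.
    func sharpnessScore(of photo: UIImage) -> Double { return 0.7 }

    let photos = Signal<UIImage>()
    photos
        .map { sharpnessScore(of: $0) }
        .next { score in
            if score < 0.5 { // illustrative threshold
                print("Looks blurry; ask the user to retake the photo.")
            }
        }

    // In the camera callback: photos.update(capturedPhoto)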

Forms

Looking for a simple form generator to collect metadata.

Swift Forms

https://github.com/ortuman/SwiftForms

Eureka

This library seems to have a custom location / map element. https://github.com/xmartlabs/Eureka
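
A sketch of the metadata form using Eureka's Form/Section/row DSL. Note that LocationRow (the map-picker element mentioned above) lives in Eureka's example code rather than the core library, so treat it as illustrative:

    import Eureka

    class FindFormViewController: FormViewController {
        override func viewDidLoad() {
            super.viewDidLoad()
            form +++ Section("Your find")
                <<< TextRow("what") {
                    $0.title = "What did you find?"
                }
                <<< TextAreaRow("how") {
                    $0.placeholder = "How did you find it?"
                }
                <<< LocationRow("where") { // map-picker row from Eureka's examples
                    $0.title = "Where was it found?"
                }
        }
    }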