android - Camera2 API - How to set long exposure times


I'm trying to capture images with 30-second exposure times in my app (I know it's possible, since the stock camera allows it).

But SENSOR_INFO_EXPOSURE_TIME_RANGE (which is supposed to be in nanoseconds) gives me this range:

13272 - 869661901

which in seconds is just

0.000013272 - 0.869661901

i.e. less than a second.

How can I use longer exposure times?

Thanks in advance!

The answer to your question:

You can't. You checked the right information and interpreted it correctly. Any value you set for the exposure time that is longer than that will be clipped to the max amount.

The answer you want:

You can still get what you want, though, by faking it. You want 30 continuous seconds' worth of photons falling on the sensor, which you can't get. But you can get something (virtually) indistinguishable from that by accumulating 30 seconds' worth of photons with tiny missing intervals interspersed.

At a high level, you need to create a list of CaptureRequests and pass it to CameraCaptureSession.captureBurst(...). This will take the shots with the minimal interstitial time possible. Then, whenever each frame of image data becomes available, pass it into a new buffer somewhere and accumulate the information (simple point-wise addition). This could probably be done most efficiently with an Allocation as the output Surface and some RenderScript; a minimal sketch follows.
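Here is a minimal sketch of that flow in Java, assuming the CameraDevice, CameraCaptureSession, and an ImageReader delivering YUV_420_888 frames are already set up; the class name, the ISO value, and the CPU-side accumulation are illustrative choices, not something the Camera2 API prescribes:

```java
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class LongExposureSketch {
    // Accumulator for point-wise addition, sized for the Y plane.
    private final int[] accumulator;
    private final int width, height;

    LongExposureSketch(int width, int height) {
        this.width = width;
        this.height = height;
        this.accumulator = new int[width * height];
    }

    // Queue enough max-length exposures to cover the requested total time.
    void captureLongExposure(CameraDevice camera,
                             CameraCaptureSession session,
                             ImageReader reader,
                             long maxExposureNs,   // top of SENSOR_INFO_EXPOSURE_TIME_RANGE
                             long totalExposureNs, // e.g. 30_000_000_000L for 30 s
                             Handler handler) throws CameraAccessException {
        // For 30 s at ~0.87 s per frame this comes out to 35 frames.
        int numFrames = (int) Math.ceil((double) totalExposureNs / maxExposureNs);

        CaptureRequest.Builder builder =
                camera.createCaptureRequest(CameraDevice.TEMPLATE_MANUAL);
        builder.addTarget(reader.getSurface());
        builder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
        builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, maxExposureNs);
        builder.set(CaptureRequest.SENSOR_SENSITIVITY, 800); // illustrative ISO

        List<CaptureRequest> burst = new ArrayList<>(numFrames);
        for (int i = 0; i < numFrames; i++) {
            burst.add(builder.build());
        }
        session.captureBurst(burst, null, handler);
    }

    // Point-wise addition of each frame's Y plane into the accumulator.
    // (An Allocation plus RenderScript would do the same work off the CPU.)
    final ImageReader.OnImageAvailableListener accumulate = reader -> {
        try (Image image = reader.acquireNextImage()) {
            if (image == null) return;
            Image.Plane yPlane = image.getPlanes()[0]; // pixel stride is 1 for Y
            ByteBuffer buf = yPlane.getBuffer();
            int rowStride = yPlane.getRowStride();
            for (int row = 0; row < height; row++) {
                for (int col = 0; col < width; col++) {
                    accumulator[row * width + col] +=
                            buf.get(row * rowStride + col) & 0xFF;
                }
            }
        }
    };
}
```

Summing on the CPU as above is just the simplest thing that works; pushing each frame into an Allocation and adding in RenderScript, as suggested, avoids copying every frame through Java.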

Notes on data format:

  • The right way to do this is to use the RAW_SENSOR output format if you can. That way the accumulated output will be directly proportional to the light that was incident on the sensor over the whole 30 s.

  • If you can't use that, for whatever reason, I recommend using YUV_420_888 output, and make sure you set the tone map curve to be linear (unfortunately you have to do this manually by creating a curve with 2 points; see the sketch after this list). Otherwise the non-linearity it introduces will ruin our scheme. (Although I'm not sure simple addition is exactly right in a linear YUV space, but it's a first approach at least.) Whether you use this approach or RAW_SENSOR, you'll want to apply your own gamma curve/tone map after accumulation to make the result "look right."

  • For the love of Pete, don't use JPEG output, for many reasons, not the least of which is that it would add a lot of interstitial time between exposures, thereby ruining our approximation of 30 s of continuous exposure.
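A sketch of both pieces, assuming you reuse the CaptureRequest.Builder from the burst sketch above; the helper names and the 1/2.2 display gamma are illustrative assumptions, not part of the Camera2 API:

```java
import android.hardware.camera2.CameraMetadata;
import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.params.TonemapCurve;

// Force a linear tone map so the YUV samples stay proportional to scene light.
static void setLinearToneMap(CaptureRequest.Builder builder) {
    // Two (Pin, Pout) control points, (0,0) and (1,1): the identity curve.
    float[] linear = {0f, 0f, 1f, 1f};
    TonemapCurve curve = new TonemapCurve(linear, linear, linear); // same for R, G, B
    builder.set(CaptureRequest.TONEMAP_MODE,
            CameraMetadata.TONEMAP_MODE_CONTRAST_CURVE);
    builder.set(CaptureRequest.TONEMAP_CURVE, curve);
}

// After accumulation: normalize the sums back to [0, 1] and apply a display
// gamma so the linear result "looks right". (Hypothetical helper.)
static byte[] toDisplayable(int[] accumulator, int numFrames) {
    byte[] out = new byte[accumulator.length];
    double maxValue = 255.0 * numFrames; // each 8-bit frame contributes up to 255
    for (int i = 0; i < accumulator.length; i++) {
        double linear = accumulator[i] / maxValue;
        out[i] = (byte) Math.round(255.0 * Math.pow(linear, 1.0 / 2.2));
    }
    return out;
}
```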

Note on exposure equivalence:

This will produce an exposure close to what you want, but not quite. It differs in two ways.

  1. There are small missing periods of photon information in the middle of this chunk of exposure time. But on the time scale we're talking about (30 s), missing a few milliseconds of light here and there is trivial.

  2. The image will be noisier than if you had taken a true single exposure of 30 s. That's because each time you read the pixel values out of the actual sensor, a little electronic noise gets added to the information. In the end you'll have 35 times that additive noise (from the 35 exposures this specific problem requires) compared to a single exposure. There's no way around this, sorry, but it might not even be noticeable, since it's small relative to the meaningful photographic signal. That depends on the quality of the camera sensor (and on ISO, which I imagine this application needs to be high).

  3. (Bonus!) This exposure is actually superior in one way: areas that might have been saturated (pure white) in a 30 s exposure will still retain definition in these far shorter exposures, so you're pretty much guaranteed not to lose your high-end details. :-)

