2019.12.23-2019.12.29

Dec 24 (Tue) 16:00-17:00  NAOJ Seminar          Lecture Room

Dec 25 (Wed) 10:30-12:00  SOKENDAI Colloquium   Lecture Room

Please see below for details.

December 24 (Tue)

Campus
Mitaka
Seminar
NAOJ seminar
Regularly Scheduled/Sporadic
Regular
Date and time
December 24, 2019, 16:00-17:00
Place
Lecture Room
Speaker
Wanda Diaz
Affiliation
Invited Scientist, National Astronomical Observatory of Japan
Title
Human Centred Astronomy
Abstract
Current analysis techniques for 2D space physics numerical data are based on scrutinising the data by eye. Space physics data sets acquired from the natural laboratory of the interstellar medium may contain events masked by noise, making them difficult to identify. This research explores the use of a perception technique as an adjunct to existing space physics visualisation techniques to improve the analysis and exploration of space physics data. The proposed technique was shown to be effective for the analysis of space physics data sets.
One aim of this presentation is to investigate whether the use of sound together with a visual display increases sensitivity to signal detection in the presence of visual noise in the data, compared with a visual display alone. Radio, particle, and high-energy data are explored using sonification.
The sonification techniques applied to the data, and their results, are numerically validated and presented. This presentation covers the results of three experiments. In all of them, volunteers used sound as an adjunct to data visualisation to identify changes in combined graphical and audio representations, and these results are compared with those from audio rendering alone and visual rendering alone. In the first experiment, audio rendering did not yield significant benefits either alone or with a visual display. In the second and third experiments, audio as an adjunct to visual rendering became significant once a fourth cue was added to the spectra: a red line sweeping across the visual display at the rate the sound was played, synchronising the audio and visual presentations. The results show that a third congruent multimodal stimulus in synchrony with the sound helps space scientists identify events masked by noise in 2D data.
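The core idea of the abstract, mapping data values to pitch while a visual cursor sweeps across the plot in time with the audio, can be sketched as a minimal parameter-mapping sonification. The function name and parameters below are illustrative assumptions, not the speaker's actual implementation:

```python
import math

def sonify(values, f_min=220.0, f_max=880.0, sr=8000, note_dur=0.05):
    """Parameter-mapping sonification: map each data value linearly to a
    frequency in [f_min, f_max] and render it as a short sine tone."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant input
    samples = []
    for v in values:
        freq = f_min + (v - lo) / span * (f_max - f_min)
        n = int(sr * note_dur)  # samples per data point
        samples.extend(math.sin(2 * math.pi * freq * t / sr) for t in range(n))
    return samples

# The synchronised visual cue (the "red line" in the abstract) would advance
# one data point every note_dur seconds while this waveform plays, so that
# what the listener hears and what the viewer sees stay aligned.
```

A spike buried in visual noise then stands out as a brief jump in pitch at the moment the cursor crosses it, which is the congruent multimodal stimulus the experiments test.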

Facilitator
Name: Narukage, Noriyuki

December 25 (Wed)

Campus
Mitaka
Seminar
SOKENDAI colloquium
Regularly Scheduled/Sporadic
Regular
Date and time
December 25, 2019, 10:30-12:00
Place
Lecture Room

Speaker
Yui Kasagi
Affiliation
SOKENDAI 1st year (M1) (Supervisors: Takayuki Kotani, Saeko Hayashi, Wako Aoki)
Title
Spectroscopic analysis for dippers found from TESS full-frame images

Facilitator
Name: Kei Ito
