Monday, February 18, 2013

3D audio technologies 2013

State of the art in 3D audio technologies 2013

In this chapter we present a brief overview of the state of the art in 3D surround sound. The technologies reviewed here range from complete frameworks that cover the whole chain from capture to playback, such as Ambisonics and Wavefield Synthesis, to extensions of existing 2D approaches, like amplitude panning, together with a brief mention of hybrid systems and solutions that have recently been introduced to the market.
This chapter is not meant to be a complete and detailed description of these technologies; it introduces their most relevant aspects and gives the reader a basic knowledge of the subject, providing context for the topics mentioned in the rest of the thesis. References to key research papers and books are provided in each section.

Binaural audio
Binaural audio is perhaps the most straightforward way of dealing with
three-dimensional audio. Since we perceive three-dimensional sound with
our two ears, all the relevant information is contained in two signals; indeed,
our perception is the result of interpreting the pressure that we receive at
the two ear drums, so recording these signals and playing them back at the
ears should suffice for recreating life-like aural experiences.
Our perception of the direction of sound is based on specific cues, mostly related to differences or similarities between the signals at the two ears, that our brain interprets and decodes. At the end of the nineteenth century, Lord Rayleigh identified two mechanisms for the localization of sound: time cues (also interpreted as phase differences) are used to determine the direction of arrival at frequencies below about 700 Hz, while intensity cues (related to signal energy) are dominant above about 1.5 kHz. In the low-frequency region of the audible spectrum, the wavelength of sound is large compared to the size of the head, so sound travels almost unaffected and reaches both ears regardless of the direction of arrival. Moreover, unless a sound source is located very close to one ear, the small distance between the ears does not cause any significant attenuation of sound pressure due to the decay with distance.
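
As a rough quantitative illustration of the time cue, a simple spherical-head model (Woodworth's approximation) is often used in textbooks to estimate the interaural time difference (ITD) as a function of source azimuth. The sketch below is a minimal illustration of that approximation only; the head radius and speed of sound are typical textbook values, not measurements from this thesis.

import math

SPEED_OF_SOUND = 343.0   # m/s, air at roughly 20 degrees C
HEAD_RADIUS = 0.0875     # m, a typical average head radius

def itd_woodworth(azimuth_deg: float) -> float:
    """Interaural time difference (seconds) for a source at the given
    azimuth (0 = straight ahead, 90 = directly to one side), using the
    spherical-head approximation ITD = (a / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

if __name__ == "__main__":
    for az in (0, 30, 60, 90):
        print(f"azimuth {az:3d} deg -> ITD {itd_woodworth(az) * 1e6:6.1f} us")

For a source at 90 degrees this gives an ITD of roughly 0.65 ms, consistent with the sub-millisecond delays that the auditory system evaluates at low frequencies.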

The basic concept behind 3D binaural audio is that if one measures the acoustic pressure produced by a sound field at the positions of the ears of a listener, and then reproduces exactly the same signals directly at the ears of the listener, the original information will be reconstructed. Binaural audio is closely linked with our perception, because it implicitly takes into account the physical mechanisms involved in our hearing. Binaural recordings are made by means of manikin heads with shaped pinnae and ear canals, with two pressure microphones inserted at the end of each ear canal, thus collecting the signals that a human would perceive. Experiments have also been done with miniature microphones inserted into the ear canals of a subject, to obtain recordings that are perfectly tailored to a person's outer ear. Binaural playback requires headphones to deliver to each ear the corresponding recorded signal, and the technique conveys a good spatial impression. It is worth mentioning that while listening to conventional mono or stereo material through headphones produces a soundstage located within the head, the binaural technique can reproduce sounds located outside the head, a property called "externalization".
Physically, the signals that reach the eardrums when a sound source emits sound from a certain position can be expressed as the convolution between the signal emitted by the source and the transfer function between the position of the source and each ear (neglecting the effects of the room). These head-related transfer functions (HRTFs) depend on the direction of the source, its distance from the listener and the particular shape of the outer ear used during recording. Various HRTF databases are available which offer impulse responses recorded with the source sampling a sphere at a fixed distance (far-field approximations are used and distance is usually neglected). With such functions, binaural material can also be synthesized by convolution: once a source signal and its position are chosen, the left and right binaural signals are obtained by convolving the source with the left and right HRTFs corresponding to that position. In this way, realistic virtual reality scenarios can be reproduced. In the real-time playback of synthetic sound fields, the adoption of head tracking to detect the orientation of the listener and adapt the sound scene accordingly has proven invaluable for resolving the localization uncertainty related to the cone of confusion and the front-back ambiguity.
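
As a concrete illustration of the synthesis step described above, the following sketch convolves a mono source with a left/right pair of head-related impulse responses (HRIRs). It assumes the HRIRs for the chosen direction have already been loaded into NumPy arrays (for example from one of the public databases mentioned above); the array names and the dummy data are placeholders, not references to a specific database.

import numpy as np
from scipy.signal import fftconvolve

def binaural_synthesis(source: np.ndarray,
                       hrir_left: np.ndarray,
                       hrir_right: np.ndarray) -> np.ndarray:
    """Render a mono source at the direction encoded by the HRIR pair.

    Returns a (num_samples, 2) array with the left and right ear signals,
    normalized to avoid clipping.
    """
    left = fftconvolve(source, hrir_left)
    right = fftconvolve(source, hrir_right)
    binaural = np.stack([left, right], axis=-1)
    peak = np.max(np.abs(binaural))
    return binaural / peak if peak > 0 else binaural

# Example with placeholder data: a short noise burst rendered with dummy HRIRs.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source = rng.standard_normal(4800)           # 0.1 s of noise at 48 kHz
    hrir_left = rng.standard_normal(256) * 0.01  # stand-ins for measured HRIRs
    hrir_right = rng.standard_normal(256) * 0.01
    out = binaural_synthesis(source, hrir_left, hrir_right)
    print(out.shape)  # (5055, 2): source length + HRIR length - 1

In an interactive scenario, the HRIR pair would be swapped (or interpolated) whenever head-tracking data changes the source direction relative to the listener.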

Saturday, February 16, 2013

Over the past year, 2013, technology has evolved at an unexpected pace.


The technology boom of the new generation takes the form of the "cell phone and Internet" combination (smartphones, tablets, iPhones, etc.), two realities that complement each other rather than remaining separate.
As these types of technology developed they eventually converged, and once the network became a global communications medium it surpassed the expectations of its creators: the Internet is no longer exclusively for the military and government and, combined with telephony services, it has become a medium of social interaction that is now present in all areas of daily life.
Today these technologies are combined in a single device, the cell phone, which is no longer limited to letting two people communicate with each other but has evolved to include Internet access in almost all its forms (data, mp3, teleconferencing, transmission of photo files and videos, etc.).
This brings countless advantages: it accelerates the pace at which information is obtained, facilitates communication, and reduces transmission and response times; in short, it turns everyday life into a technological event, all of this tied to the economic growth of societies and, beyond that, to all the changes in the natural order of things that technology generates.
Having seen the wide and constant changes that mobile telephony and the Internet have brought to the global community, my interest arose in learning more about the issues that shape this revolution in our own environment.
To meet the above, this paper proposes the development of a software application for a mobile computing platform that provides access, through mobile devices such as cell phones, to information stored in a database hosted on a Web server.
The application provides for the registration and monitoring of information belonging to a pharmaceutical entity, that is, the relevant information about each client and their purchases of products and prescription medicines.
This gives customers the ability to manage their purchases themselves, anytime and anywhere, without having to physically visit a pharmacy branch, with only the help of a modern cell phone.
It also proposes the development of a web application accessible on the Intranet, providing additional features such as user registration, stock control and over-the-counter sale of drugs and products.
Finally, it proposes the development of a Web site accessible from the Internet that acts as the pharmacy's site and offers e-commerce functions such as customer registration and the on-line sale of products and/or drugs, allowing customers to make their purchases virtually.
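
To make the proposed architecture more concrete, the sketch below shows, in broad strokes, how a mobile client might query the Web server's database layer over HTTP. All endpoint paths, parameter names and field names are hypothetical placeholders introduced for illustration; they are not part of the actual application described above.

import json
import urllib.parse
import urllib.request

# Hypothetical base URL of the pharmacy's Web server; a real deployment
# would use the server configured for the application.
BASE_URL = "http://example.com/pharmacy/api"

def fetch_purchase_history(customer_id: str) -> list:
    """Retrieve a customer's purchase records from the server
    (hypothetical endpoint and JSON layout, for illustration only)."""
    query = urllib.parse.urlencode({"customer_id": customer_id})
    url = f"{BASE_URL}/purchases?{query}"
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.loads(response.read().decode("utf-8"))

def place_order(customer_id: str, product_code: str, quantity: int) -> dict:
    """Submit an on-line purchase as a JSON POST request (hypothetical)."""
    payload = json.dumps({
        "customer_id": customer_id,
        "product_code": product_code,
        "quantity": quantity,
    }).encode("utf-8")
    request = urllib.request.Request(
        f"{BASE_URL}/orders",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read().decode("utf-8"))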

Differences OLTP vs Data Warehouse

Traditional transaction systems and data warehousing applications are polar opposites in terms of their design requirements and operating characteristics.
OLTP applications are organized around the transactions they were built to execute, for example moving money between accounts, posting a charge or credit, or recording a return of inventory. A data warehouse, in contrast, is organized around business concepts such as customer, invoice and product.
Another difference lies in the number of users. Normally, a data warehouse has fewer users than an OLTP system. It is common to find transactional systems accessed by hundreds of users simultaneously, while a data warehouse is accessed by only tens of users. OLTP systems perform hundreds of transactions per second, while a single query on a data warehouse can take minutes. Another factor is size: transactional systems are frequently smaller than data warehouses, because the information in a data warehouse may come from several OLTP systems.
There are also design differences: while an OLTP schema is highly normalized, a data warehouse tends to be denormalized. An OLTP system typically consists of a large number of tables, each with few columns, while a data warehouse has fewer tables, each of which tends to have a larger number of columns.
OLTP systems are updated continuously by the operational systems every day, while data warehouses are updated periodically, in batches.
OLTP structures are very stable and rarely change, while those of a data warehouse change constantly as it evolves. This is because the types of queries it must support are varied and it is impossible to foresee all of them in advance.
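
To illustrate the normalization difference in concrete terms, the sketch below contrasts a simplified, hypothetical normalized OLTP layout with a denormalized fact table of the kind typically found in a data warehouse. Table and column names are invented for the example.

import sqlite3

conn = sqlite3.connect(":memory:")

# Normalized OLTP layout: many narrow tables, one fact per row,
# relationships expressed through foreign keys.
conn.executescript("""
CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE product  (product_id  INTEGER PRIMARY KEY, name TEXT, price REAL);
CREATE TABLE invoice  (invoice_id  INTEGER PRIMARY KEY,
                       customer_id INTEGER REFERENCES customer(customer_id),
                       invoice_date TEXT);
CREATE TABLE invoice_line (invoice_id INTEGER REFERENCES invoice(invoice_id),
                           product_id INTEGER REFERENCES product(product_id),
                           quantity   INTEGER);
""")

# Denormalized warehouse layout: one wide fact table in which descriptive
# attributes are repeated on every row, so analytical queries need few joins.
conn.executescript("""
CREATE TABLE sales_fact (invoice_date  TEXT,
                         customer_name TEXT,
                         product_name  TEXT,
                         unit_price    REAL,
                         quantity      INTEGER,
                         total_amount  REAL);
""")

conn.close()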
Improving Information Delivery: complete, correct, consistent, timely and accessible information; the information people need, at the time they need it and in the format they need it.
Improving the Decision-Making Process: with better supporting information, decisions are made faster; business people also gain greater confidence in their own decisions and in those of others, and achieve a greater understanding of the impact of their decisions.
Positive Impact on Business Processes: when people are given access to better-quality information, the company can:
   · Eliminate delays in business processes that result from incorrect, inconsistent and/or nonexistent information.
   · Integrate and optimize business processes through shared and integrated information sources.
   · Eliminate the production and processing of data that is not used or required, a consequence of poorly designed or no longer used applications.


Improving productivity and efficiency through a multistage implementation

Financial services firms can take an existing, inefficient infrastructure for risk management and compliance and gradually grow it into an integrated, highly efficient grid system.
As shown in Figure 1, an existing infrastructure may comprise stovepipes of legacy applications: disparate islands of applications, tools, and compute and storage resources with little to no communication among them. A firm can start by enabling one application (a simulation application for credit risk modeling, for example) to run faster by using grid middleware to virtualize the compute and storage resources supporting that application.
The firm can then extend the same solution to another application, for example a simulation application used to model market risk. Compute and storage resources for both simulation applications are virtualized by extending the layer of grid middleware; thus both applications can share processing power, networked storage and centralized scheduling. Resiliency is achieved at the application level through failover built into the DataSynapse GridServer. If a failure occurs, or the need to prioritize particular analyses arises, one application can pull unutilized resources that are supporting the other application. This process also facilitates communication and collaboration across functional areas and applications to provide a better view of enterprise risk exposure.
Alternatively, a firm can modernize by grid-enabling a particular decision engine. A decision engine, such as one developed with Fair Isaac's tools, can deliver the agility of business rules and the power of predictive analytic models while leveraging the power of the grid to execute decisions in record time. This approach ensures that only the compute-intensive components are grid-enabled, while simultaneously migrating these components to technology specifically designed for decision components.
Over time, all applications can become completely grid-enabled or can share a common set of grid-enabled decision engines. All compute and data resources become one large resource pool for all the applications, increasing the average utilization rate of compute resources from the 2 to 50 percent typical of a heterogeneous architecture to over 90 percent in a grid architecture.
Based on priorities and rules, DataSynapse GridServer automatically matches application requests with available resources in the distributed infrastructure. This real-time brokering of requests with available resources enables applications to be serviced immediately, driving greater throughput. Application workloads can be serviced in task units of milliseconds, thus allowing applications with run times in seconds to execute in a mere fraction of a second. This run-time reduction is crucial as banks move from online to real-time processing, which is required for functions such as credit decisions made at the point of trade execution. Additionally, the run time of applications that require hours to process, such as end-of-day profit and loss reports on a credit portfolio, can be reduced to minutes by leveraging this throughput and resource allocation strategy.
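
The brokering behaviour described above, matching prioritized work requests to whatever compute resources are currently free, can be illustrated generically with a priority-based task pool. The sketch below is not DataSynapse GridServer code (its APIs are not shown in this document); it is a minimal, generic Python illustration of prioritized scheduling over a shared pool of workers.

import heapq
import time
from concurrent.futures import ThreadPoolExecutor

class PriorityBroker:
    """Generic stand-in for grid middleware: queue tasks with a priority
    and dispatch them to a shared pool of workers as capacity frees up."""

    def __init__(self, workers: int = 4):
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._queue = []   # heap of (priority, sequence, fn, args)
        self._seq = 0

    def submit(self, priority: int, fn, *args):
        heapq.heappush(self._queue, (priority, self._seq, fn, args))
        self._seq += 1

    def run_all(self):
        futures = []
        while self._queue:
            # Lowest priority number is dispatched first.
            _, _, fn, args = heapq.heappop(self._queue)
            futures.append(self._pool.submit(fn, *args))
        return [f.result() for f in futures]

def simulate_risk(scenario: int) -> float:
    time.sleep(0.01)   # placeholder for a pricing or VaR simulation step
    return scenario * 0.1

if __name__ == "__main__":
    broker = PriorityBroker(workers=8)
    for s in range(20):
        # Intraday credit decisions get priority 0; batch scenarios priority 1.
        broker.submit(0 if s < 5 else 1, simulate_risk, s)
    print(sum(broker.run_all()))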

The workhorses of the IBM grid infrastructure

The workhorses of the IBM grid infrastructure are the grid engines: desktop PCs, workstations or servers that run the UNIX, Microsoft Windows or Linux operating systems. These compute resources execute various jobs submitted to the grid, and have access to a shared set of storage devices.
The IBM Grid Offering for Risk Management and Compliance relies on grid middleware from DataSynapse to create distributed sets of virtualized resources. The production-proven, award-winning DataSynapse GridServer application infrastructure platform extends applications in real time to operate in a distributed computing environment across a virtual pool of underutilized compute resources. GridServer application interface modules allow risk management and compliance applications, and next-generation risk management and compliance application platforms, to be grid-enabled.
IBM DB2 Information Integrator enables companies to have integrated, real-time access to structured and unstructured information across and beyond the enterprise. Critical to the grid infrastructure, the software accelerates risk and compliance analytics applications that process massive amounts of data for making better-informed decisions. DB2 Information Integrator provides transparent access to any data source, regardless of its location, type or platform.

Real world, real successes in 2013

IBM is the industry-leading supplier of grid solutions, services and expertise to the scientific and technical communities, as well as to the financial services sector. Leveraging its considerable experience in implementing commercial grids worldwide, IBM has created targeted grid offerings customized to meet the unique grid computing needs of the financial services industry. IBM Grid Computing is currently engaged with more than 20 major financial institutions in North America, Europe and Japan, and more than 100 organizations worldwide.
Wachovia worked with IBM and DataSynapse to enhance the processing speed of trading analytics in the financial services company's fixed income derivatives group. Before implementing a grid solution, profit and loss reports and risk reports took as long as 15 hours to run; now, with the grid solution in place, Wachovia can turn around mission-critical reports in minutes on a real-time, intraday basis. Moreover, trading volume increased by 400 percent, and the number of simulations by 2,500 percent. As a result, the group can book larger, more exotic and more lucrative trades with more accurate risk taking.

The importance of interaction analysis in CSCL (2013)

We know that these collaborative learning environments are characterized by a high degree of user interaction with the system, and therefore generate a large number of action events. Managing these action events is a key issue in such applications. On the one hand, the analysis of data collected from real-life, online collaborative learning situations helps us better understand important aspects of how the group functions and of the collaborative learning process; this in turn can guide both the design of a more functional workspace and its software components, and the development of improved facilities such as awareness, feedback, monitoring of the workspace, and evaluation and monitoring of the group's work by a coordinator, tutor, etc. Indeed, with proper data filtering and event management it is possible to establish a list of parameters that can be used to analyze the group's activity in the shared space (e.g., tutor-to-group or member-to-member communication flow, asynchronism in the group space, etc.). These parameters make it possible to assess the efficiency of the group's activities, with the aim of improving group performance and the individual attitudes of its members in the shared workspace.
Furthermore, to this end the application design will need to organize and manage both the resources offered by the system and the users accessing those resources. All of this user-to-user and user-to-resource interaction generates events, or "logs", which are stored in log files and represent the base of information for the statistical processing aimed at obtaining knowledge about the system. This facilitates the collaborative learning process by keeping users abreast of what is happening in the system (for example, the contributions of others, documents created, etc.) and by monitoring user behaviour in order to provide support (e.g., helping students who are unable to perform a task on their own). Therefore, user-to-user and user-to-resource interaction is critical in any collaborative learning environment that aims to enable groups of students to communicate with each other and achieve common goals (e.g., a classroom activity carried out in collaboration).
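
As a simple illustration of the kind of statistical processing mentioned above, the sketch below aggregates hypothetical log records into a member-to-member communication-flow count, one of the awareness parameters discussed in the text. The log format and field names are invented for the example; a real CSCL system would define its own event schema.

from collections import Counter

# Hypothetical log records: (timestamp, actor, action, target)
LOG_EVENTS = [
    ("2013-02-16T10:01:02", "alice", "send_message", "bob"),
    ("2013-02-16T10:01:30", "bob",   "send_message", "alice"),
    ("2013-02-16T10:02:11", "alice", "edit_document", "report.doc"),
    ("2013-02-16T10:03:45", "carol", "send_message", "alice"),
    ("2013-02-16T10:04:02", "tutor", "send_message", "group"),
]

def communication_flow(events):
    """Count sender -> receiver message pairs (member-to-member flow)."""
    flow = Counter()
    for _, actor, action, target in events:
        if action == "send_message":
            flow[(actor, target)] += 1
    return flow

if __name__ == "__main__":
    for (sender, receiver), count in communication_flow(LOG_EVENTS).items():
        print(f"{sender} -> {receiver}: {count} message(s)")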
Although user interaction is the most important thing to manage in these applications, it is usually also important to be able to monitor and control the performance and behaviour of the overall system. This allows the administrator to continuously monitor critical parts of the system and act when necessary. Moreover, it adds an implicit layer of security to what already exists (for example, monitoring user habits to detect fraudulent use of the system by unauthorized users).
To effectively communicate the knowledge gained from the activity of the user group, in the form of knowledge and feedback, CSCL applications should fully support the three aspects that are essential in all collaborative applications, namely coordination, communication and collaboration, in order to create virtual environments where students, teachers, tutors, etc. are able to cooperate with each other to achieve a common learning goal. Coordination involves organizing the group in order to achieve the objectives that have been set and monitoring user activity, which is made possible by maintaining awareness among the participants.
Communication refers to the exchange of messages between users, within and between groups, and may take place in both synchronous and asynchronous modes. Finally, collaboration allows the members of the group to share all kinds of resources, again in both synchronous and asynchronous modes. Coordination, collaboration and communication all generate many events that are communicated to users after they have been processed and analyzed, in order to provide users with as much immediate and constant awareness, and as much feedback, as possible.