    In this chapter, we provide the most significant asymptotic results concerning the existence of optimal codes for noisy channels. It is proven that Shannon's amount of information is a bound on Hartley's amount of information transmitted with asymptotically zero probability of error. This is the meaning of the second asymptotic theorem. Further, we provide formulae showing how quickly the probability of decoding error decreases as the block length increases. Contrary to the conventional approach, we state the above results not in terms of channel capacity (i.e., we do not maximize the limit amount of information with respect to the probability density of the input variable), but in terms of Shannon's amount of information.
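    As a hedged illustration of the type of formula involved (our notation, not necessarily the chapter's), the decay of the decoding error probability is exponential in the block length \(n\): for transmission rates \(R\) below the Shannon amount of information \(I\) per symbol, random-coding arguments yield a bound of the form
    \[ P_{\mathrm{err}} \leqslant e^{-n E(R)}, \qquad E(R) > 0 \ \text{for}\ R < I, \]
    where \(E(R)\) is a reliability (error-exponent) function whose exact shape depends on the channel and on the coding ensemble.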
    In this chapter, the general theory concerning the value of Shannon's information, covered in the previous chapter, will be applied to a number of important practical cases of Bayesian systems. For these systems, we derive explicit expressions for the potential Γ(β), which allow us to find, in parametric form, the dependency between the losses (risk) R and the amount of information I and then, eventually, to find the value function V(I).
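    Schematically (the signs and normalization below follow one common Legendre-type convention and may differ from the book's), the potential generates the pair (R, I) by differentiation:
    \[ R = \frac{d\Gamma}{d\beta}, \qquad I = \beta\,\frac{d\Gamma}{d\beta} - \Gamma(\beta), \]
    so that eliminating the parameter β yields R as a function of I, from which the value function is obtained as the achievable reduction of risk.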
    In this chapter, we discuss a relation between the concept of the amount of information and that of physical entropy. As is well known, the latter allows us to express quantitatively the second law of thermodynamics, which forbids, in an isolated system, the existence of processes accompanied by a decrease of entropy. If there exists an influx of information dI about the system, i.e. if the physical system is isolated only thermally, but not informationally, then the above law should be generalized by substituting inequality dH ≥ 0 with inequality dH + dI ≥ 0. Therefore, if there is an influx of information, then the thermal energy of the system can be converted (without the help of a refrigerator) into mechanical energy. In other words, a perpetual motion machine of the second kind, powered by information, becomes possible.
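    A one-line worked consequence (standard in the physics literature; entropy and information are measured here in natural units): under an information influx dI, the generalized law permits the system's entropy to fall by at most dI, so at temperature T the work extractable from thermal energy is bounded by
    \[ dW \leqslant k_B T\, dI, \]
    i.e. at most \(k_B T \ln 2 \approx 2.9 \times 10^{-21}\) J per bit at T = 300 K, which is the familiar Szilard-engine bound.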
    In the previous chapter, for one particular example (see Sections 3.1 and 3.4), we showed that in calculating the maximum entropy (i.e. the capacity of a noiseless channel) the constraint \(c(y) \leqslant a\) imposed on feasible realizations is equivalent, for a sufficiently long code sequence, to the constraint \(\mathbb{E}[c(y)] \leqslant a\) on the mean value \(\mathbb{E}[c(y)]\). In this chapter we prove (Section 4.3) that under certain assumptions such equivalence takes place in the general case; this is the assertion of the first asymptotic theorem. In what follows, we shall also consider the other two asymptotic theorems (Chapters 7 and 11), which are the most profound results of information theory. All of them have the following feature in common: ultimately, each theorem states that, for sufficiently large systems, the difference between the concepts of discreteness and continuity disappears, and the characteristics of a large collection of discrete objects can be calculated using a continuous functional dependence involving averaged quantities. For the first variational problem, this feature is expressed by the fact that the discrete function \(H = \ln M\) of \(a\), which exists under the constraint \(c(y) \leqslant a\), is asymptotically replaced by a continuous function \(H(a)\) calculated by solving the first variational problem. As far as the proof is concerned, the first asymptotic theorem turns out to be related to the theorem on the stability of the canonical distribution (Section 4.2), which is very important in statistical thermodynamics and which is actually proved there when the canonical distribution is derived from the microcanonical one. Here we consider it in a more general and abstract form. The relationship between the first asymptotic theorem and the theorem on the canonical distribution once more underlines the intrinsic unity of the mathematical apparatus of information theory and statistical thermodynamics.
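    For concreteness, here is the standard maximum-entropy calculation behind the continuous function \(H(a)\) (the symbols \(Z\) and \(\beta\) are our choices, not necessarily the chapter's). Maximizing the entropy subject to \(\mathbb{E}[c(y)] = a\) yields the canonical distribution
    \[ p(y) = \frac{1}{Z(\beta)}\, e^{-\beta c(y)}, \qquad Z(\beta) = \sum_y e^{-\beta c(y)}, \]
    with \(\beta\) chosen so that \(\mathbb{E}[c(y)] = a\); the resulting entropy is the smooth function \(H(a) = \ln Z(\beta) + \beta a\).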
    The concept of the value of information, introduced in this chapter, connects Shannon's information theory with statistical decision theory. In the latter theory, the most basic notion is that of average cost, or risk, which characterizes the quality of the decisions being made. The value of information can be described as the maximum benefit that can be gained in the process of minimizing the average cost with the help of a given amount of information. Such a definition of the value of information turns out to be related to the formulation and solution of certain conditional variational problems.
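    In symbols (a schematic rendering of this definition; the notation is ours): if \(R(I)\) denotes the minimal average cost attainable by any decision rule that uses at most \(I\) units of Shannon information about the state of nature, then the value of information is the risk reduction
    \[ V(I) = R(0) - R(I), \]
    and determining \(R(I)\) is precisely the conditional variational problem mentioned above: minimize the average cost subject to a constraint on the amount of information.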
    Spike trains and local field potentials (LFPs) are two different manifestations of neural activity recorded simultaneously from the same electrode array, and they carry complementary information about stimuli and behaviors. This paper proposes a tensor-product-kernel-based decoder, which allows samples from the different sources to be modeled individually and mapped onto the same reproducing kernel Hilbert space (RKHS), defined by the tensor product of the individual kernels for each source, where linear regression is conducted to identify the nonlinear mapping from the multi-type neural responses to the stimuli. The decoding results of the rat sensory stimulation experiment show that the tensor-product-kernel-based decoder outperforms decoders based on either single type of neural activity alone.
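    A minimal sketch of the idea in Python (the Gaussian kernels on binned spike counts and LFP snippets are illustrative assumptions; the paper's own spike-train kernel may differ). The tensor-product kernel evaluates as the elementwise product of the per-source Gram matrices, so regression in the joint RKHS reduces to kernel ridge regression:

        import numpy as np

        def gaussian_gram(A, B, sigma):
            # Gram matrix of a Gaussian kernel between rows of A and rows of B.
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def fit(spikes, lfps, stimuli, lam=1e-2, sig_s=1.0, sig_l=1.0):
            # Tensor-product kernel = product of the individual Grams,
            # i.e. the kernel of the tensor product of the two RKHSs.
            K = gaussian_gram(spikes, spikes, sig_s) * gaussian_gram(lfps, lfps, sig_l)
            return np.linalg.solve(K + lam * np.eye(len(K)), stimuli)

        def predict(alpha, spk_tr, lfp_tr, spk_te, lfp_te, sig_s=1.0, sig_l=1.0):
            # Evaluate the same product kernel between test and training samples.
            K = gaussian_gram(spk_te, spk_tr, sig_s) * gaussian_gram(lfp_te, lfp_tr, sig_l)
            return K @ alpha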
    Precise control of neural circuits via microstimulation is an indispensable but challenging objective in neuro-engineering. The effect of electrical stimulation is imprecise and exhibits spatio-temporal blurring, and at the neuron level the effects are obfuscated by the complexity of neural dynamics. This paper proposes an online multiple-input-multiple-output (MIMO) adaptive inverse controller for somatosensory microstimulation. Control of the target firing pattern is achieved by placing an adaptive controller before the stimulator, whose transfer function is continually adjusted to be the inverse of the neural circuit transfer function. In this paper, a synthetic neural circuit is built from leaky integrate-and-fire (LIF) neurons to model the neural circuit. Considering a Poisson model for the target spike train, we identify the LIF neural model using a generalized linear model (GLM) fitted with a maximum-likelihood (ML) criterion. The controller architecture becomes the inverse of the GLM, and its parameters are periodically adjusted to ensure that the input to the LIF model approximates the target spike-time response. On synthetic data, the results show that this control scheme successfully determines the impulse timing and amplitude of the desired stimuli and drives the dynamic neural circuit output to follow the target firing pattern. With the simulated model, the method is able to preserve the temporal precision of neural spike trains.
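    A minimal sketch of the ML identification step in Python (the design matrix X of stimulation-history features and the binned spike counts y are assumed inputs; the paper's exact covariates are not reproduced here):

        import numpy as np

        def fit_poisson_glm(X, y, lr=1e-3, n_iter=5000):
            # ML fit of y_t ~ Poisson(exp(X_t . w)) by gradient ascent on the
            # log-likelihood  sum_t [ y_t (X_t . w) - exp(X_t . w) ].
            w = np.zeros(X.shape[1])
            for _ in range(n_iter):
                rate = np.exp(X @ w)          # conditional intensity per bin
                w += lr * (X.T @ (y - rate))  # gradient of the log-likelihood
            return w

    Inverting the fitted GLM then provides the controller's transfer function, which the adaptive loop keeps re-adjusting as described above.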
    The Perception-Action Cycle (PAC) is a central component of goal-directed behavior because it links internal percepts with external outcomes in the environment. Drawing inspiration from the PAC, we are developing a Brain-Machine Interface control architecture that utilizes both motor commands and goal information taken directly from the brain to navigate to novel targets in an environment. An Actor-Critic algorithm was selected for decoding the neural motor commands because it is a PAC-based computational framework in which the critic implements the perception component and the actor is responsible for taking actions. In this work, we develop a biologically realistic simulator to analyze the performance of the decoder in terms of convergence and target acquisition. Experience from the simulator will guide parameter selection and assist in understanding the architecture before animal experiments. By varying the signal-to-noise ratio of the neural input and of the error signal, we were able to demonstrate how the learning rate and initial conditions affect a motor-control target selection task. In this framework, the naïve decoder was able to reach targets in the presence of noise in the error signal and neural motor command with 98% accuracy.
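    A minimal sketch of the actor-critic update in Python (tabular and deliberately simplified; the state/action sizes and the Gaussian noise injected into the error signal, mimicking the SNR manipulation above, are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(0)
        n_states, n_actions = 16, 4
        V = np.zeros(n_states)                    # critic: state-value estimates
        theta = np.zeros((n_states, n_actions))   # actor: action preferences

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        def update(s, a, r, s_next, alpha_c=0.1, alpha_a=0.05,
                   gamma=0.95, noise=0.1):
            # The TD error plays the role of the evaluative "perception"
            # signal; additive noise models a degraded error channel.
            delta = r + gamma * V[s_next] - V[s] + noise * rng.normal()
            V[s] += alpha_c * delta                    # critic update
            grad = -softmax(theta[s]); grad[a] += 1.0  # d log pi(a|s) / d theta[s]
            theta[s] += alpha_a * delta * grad         # actor (policy-gradient) update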
