Monday, 30 September 2013

Rational Functions and Their Geometric Notions

A function f(x) is called a rational function if and only if it can be written in the form
 f(x) = \frac{P(x)}{Q(x)}
where P and Q are polynomials in x and Q is not the zero polynomial. The domain of f is the set of all points x for which the denominator Q(x) is not zero, assuming P and Q have no common factors.

In abstract algebra the concept of a polynomial is extended to include formal expressions in which the coefficients can be taken from any field. In this setting, given a field F and an indeterminate X, a rational expression is any element of the field of fractions of the polynomial ring F[X]. Any rational expression can be written as the quotient of two polynomials P/Q with Q ≠ 0, although this representation is not unique: P/Q is equivalent to R/S, for polynomials P, Q, R, and S, when PS = QR. However, since F[X] is a unique factorization domain, every rational expression has a unique representation P/Q with P and Q of lowest degree and Q chosen to be monic. This is analogous to the fact that a fraction of integers can always be written uniquely in lowest terms by canceling common factors.
The field of rational expressions is denoted F(X). This field is said to be generated (as a field) over F by (a transcendental element) X, because F(X) does not contain any proper subfield containing both F and the element X.
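This reduction to lowest terms can be carried out mechanically. A minimal sketch using Python's sympy library (cancel is sympy's function for exactly this reduction; the example polynomial is ours):

import sympy as sp

x = sp.symbols('x')
expr = (x**2 - 1) / (x**2 + 2*x + 1)
# cancel() removes the common factor (x + 1), leaving lowest terms
print(sp.cancel(expr))   # (x - 1)/(x + 1)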

Complex rational functions

In complex analysis, a rational function
f(z) = \frac{P(z)}{Q(z)}
is the ratio of two polynomials with complex coefficients, where Q is not the zero polynomial and P and Q have no common factor (this avoids f taking the indeterminate value 0/0). The domain and range of f are usually taken to be the Riemann sphere, which avoids any need for special treatment at the poles of the function (where Q(z) is 0).
The degree of a rational function is the maximum of the degrees of its constituent polynomials P and Q. If the degree of f is d, then the equation
f(z) = w \,
has d distinct solutions in z except for certain values of w, called critical values, where two or more solutions coincide. The function f can therefore be thought of as a d-fold covering of the w-sphere by the z-sphere.
Rational functions of degree 1 are called Möbius transformations and form the automorphism group of the Riemann sphere. Rational functions are representative examples of meromorphic functions.
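As an illustration of the covering picture, the degree-2 rational function f(z) = (z^2 + 1)/z takes each value w at two points, except at the critical values w = ±2, where the discriminant of z^2 − wz + 1 vanishes and the two solutions coincide. A quick sympy check (a sketch, assuming sympy):

import sympy as sp

z = sp.symbols('z')
f = (z**2 + 1) / z                  # a rational function of degree 2
print(sp.solve(sp.Eq(f, 3), z))     # two distinct solutions of f(z) = 3
print(sp.solve(sp.Eq(f, 2), z))     # [1]: at the critical value w = 2 the solutions coincide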

Notion of a rational function on an algebraic variety

Like polynomials, rational expressions can also be generalized to n indeterminates X1,..., Xn, by taking the field of fractions of F[X1,..., Xn], which is denoted by F(X1,..., Xn).
An extended version of the abstract idea of rational function is used in algebraic geometry. There the function field of an algebraic variety V is formed as the field of fractions of the coordinate ring of V (more precisely, of a Zariski-dense affine open subset of V). Its elements f are considered as regular functions in the sense of algebraic geometry on non-empty open sets U, and may also be seen as morphisms to the projective line.


Thursday, 26 September 2013

Basics Of Set Theory

Set theory is the branch of mathematical logic that studies sets, which are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that are relevant to mathematics. The language of set theory can be used in the definitions of nearly all mathematical objects.


 Basic concepts and notation

Set theory begins with a fundamental binary relation between an object o and a set A. If o is a member (or element) of A, write o ∈ A. Since sets are objects, the membership relation can relate sets as well.
A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set A are also members of set B, then A is a subset of B, denoted A ⊆ B. For example, {1, 2} is a subset of {1, 2, 3}, but {1, 4} is not. From this definition, it is clear that a set is a subset of itself; for cases where one wishes to rule this out, the term proper subset is defined. A is called a proper subset of B if and only if A is a subset of B, but B is not a subset of A.
Just as arithmetic features binary operations on numbers, set theory features binary operations on sets. The basic operations are the following:

  • Union of the sets A and B, denoted A ∪ B, is the set of all objects that are a member of A, or B, or both. The union of {1, 2, 3} and {2, 3, 4} is the set {1, 2, 3, 4}.
  • Intersection of the sets A and B, denoted A ∩ B, is the set of all objects that are members of both A and B. The intersection of {1, 2, 3} and {2, 3, 4} is the set {2, 3}.
  • Set difference of U and A, denoted U \ A, is the set of all members of U that are not members of A. The set difference {1, 2, 3} \ {2, 3, 4} is {1}, while, conversely, the set difference {2, 3, 4} \ {1, 2, 3} is {4}. When A is a subset of U, the set difference U \ A is also called the complement of A in U. In this case, if the choice of U is clear from the context, the notation A^c is sometimes used instead of U \ A, particularly if U is a universal set as in the study of Venn diagrams.
  • Symmetric difference of sets A and B, denoted A △ B or A ⊖ B, is the set of all objects that are a member of exactly one of A and B (elements which are in one of the sets, but not in both). For instance, for the sets {1, 2, 3} and {2, 3, 4}, the symmetric difference set is {1, 4}. It is the set difference of the union and the intersection, (A ∪ B) \ (A ∩ B) or (A \ B) ∪ (B \ A).
  • Cartesian product of A and B, denoted A × B, is the set whose members are all possible ordered pairs (a, b) where a is a member of A and b is a member of B. The Cartesian product of {1, 2} and {red, white} is {(1, red), (1, white), (2, red), (2, white)}.
  • Power set of a set A is the set whose members are all possible subsets of A. For example, the power set of {1, 2} is {{}, {1}, {2}, {1, 2}}.
Some basic sets of central importance are the empty set (the unique set containing no elements), the set of natural numbers, and the set of real numbers.
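Python's built-in set type implements all of the operations above directly, which makes them easy to experiment with (a minimal sketch using the same example sets):

from itertools import product, chain, combinations

A, B = {1, 2, 3}, {2, 3, 4}
print(A | B)   # union: {1, 2, 3, 4}
print(A & B)   # intersection: {2, 3}
print(A - B)   # set difference: {1}
print(A ^ B)   # symmetric difference: {1, 4}
print(set(product({1, 2}, {'red', 'white'})))   # Cartesian product
subsets = chain.from_iterable(combinations({1, 2}, r) for r in range(3))
print([set(s) for s in subsets])                # power set: [set(), {1}, {2}, {1, 2}]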


Tuesday, 24 September 2013

Group Algebras of Topological Groups

In mathematics, the group algebra is any of various constructions to assign to a locally compact group an operator algebra (or more generally a Banach algebra), such that representations of the algebra are related to representations of the group. As such, they are similar to the group ring associated to a discrete group.



Group algebras of topological groups: Cc(G)

For the purposes of functional analysis, and in particular of harmonic analysis, one wishes to carry over the group ring construction to topological groups G. In case G is a locally compact Hausdorff group, G carries an essentially unique left-invariant countably additive Borel measure μ called Haar measure. Using the Haar measure, one can define a convolution operation on the space Cc(G) of complex-valued continuous functions on G with compact support; Cc(G) can then be given any of various norms and the completion will be a group algebra.
To define the convolution operation, let f and g be two functions in Cc(G). For t in G, define
 [f * g](t) = \int_G f(s) g(s^{-1} t)\, d \mu(s). \quad
The fact that f * g is continuous is immediate from the dominated convergence theorem. Also

 \operatorname{Support}(f * g) \subseteq \operatorname{Support}(f) \cdot  \operatorname{Support}(g)
where the dot stands for the product in G. Cc(G) also has a natural involution defined by:
  f^*(s) = \overline{f(s^{-1})} \Delta(s^{-1})
where Δ is the modular function on G. With this involution, it is a *-algebra.
Theorem. If Cc(G) is given the norm
 \|f\|_1 := \int_G |f(s)|\, d\mu(s),
it becomes an involutive normed algebra with an approximate identity.
The approximate identity can be indexed on a neighborhood basis of the identity consisting of compact sets. Indeed, if V is a compact neighborhood of the identity, let f_V be a non-negative continuous function supported in V such that
 \int_V f_{V}(g)\, d \mu(g) =1. \quad
Then {f_V}_V is an approximate identity. A group algebra has an identity, as opposed to just an approximate identity, if and only if the topology on the group is the discrete topology.
Note that for discrete groups, Cc(G) is the same thing as the complex group ring CG.
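For instance, on the finite cyclic group Z/nZ with counting measure, the convolution formula above becomes circular convolution of sequences. A minimal numeric sketch (assuming numpy; the function name is ours):

import numpy as np

def cyclic_convolve(f, g):
    # (f * g)(t) = sum over s of f(s) g(t - s), indices taken mod n,
    # since s^{-1} t = t - s in the additive group Z/nZ
    n = len(f)
    return np.array([sum(f[s] * g[(t - s) % n] for s in range(n))
                     for t in range(n)])

f = np.array([1.0, 2.0, 0.0, 0.0])
g = np.array([0.0, 1.0, 1.0, 0.0])
print(cyclic_convolve(f, g))   # [0. 1. 3. 2.]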
The importance of the group algebra is that it captures the unitary representation theory of G as shown in the following
Theorem. Let G be a locally compact group. If U is a strongly continuous unitary representation of G on a Hilbert space H, then
 \pi_U (f) = \int_G f(g) U(g)\, d \mu(g) \quad
is a non-degenerate bounded *-representation of the normed algebra Cc(G). The map

 U \mapsto \pi_U \quad
is a bijection between the set of strongly continuous unitary representations of G and non-degenerate bounded *-representations of Cc(G). This bijection respects unitary equivalence and strong containment. In particular, πU is irreducible if and only if U is irreducible.
Non-degeneracy of a representation π of Cc(G) on a Hilbert space Hπ means that
 \{\pi(f) \xi: f \in \operatorname{C}_c(G), \xi \in H_\pi \}
is dense in Hπ.


Saturday, 21 September 2013

Something About Factorization of Polynomials

In mathematics and computer algebra, the factorization of a polynomial consists of expressing it as a product of irreducible factors. This decomposition is theoretically possible and is unique for polynomials with coefficients in any field, but rather strong restrictions on the field of coefficients are needed before the factorization can be computed by an algorithm. In practice, algorithms have been designed only for polynomials with coefficients in a finite field, in the field of rationals, or in a finitely generated field extension of one of them.
The case of the factorization of univariate polynomials over a finite field, which is the object of this article, is especially important, because all the algorithms (including those for multivariate polynomials over the rational numbers) that are sufficiently efficient to be implemented reduce the problem to this case (see Polynomial factorization). It is also interesting for various applications of finite fields, such as coding theory (cyclic redundancy codes and BCH codes), cryptography (public key cryptography by means of elliptic curves), and computational number theory.
Since the second half of the twentieth century, major improvements have been made on this problem, with the result that factoring a polynomial of degree 1000 over a medium-sized prime field (fewer than one million elements) is now a routine task.
As the reduction of the factorization of multivariate polynomials to that of univariate polynomials is not specific to the case of coefficients in a finite field, only polynomials in one variable are considered in this article.


Finite field

The theory of finite fields, whose origins can be traced back to the works of Gauss and Galois, has played a part in various branches of mathematics. There has been a resurgence of interest in finite fields, partly due to their applicability in other areas of mathematics and science such as computer science, and in particular to important applications in coding theory and cryptography.
A finite field or Galois field is a field with a finite order (number of elements). The order of a finite field is always a prime or a power of a prime. For each prime power q = p^r, there exists exactly one finite field with q elements, up to isomorphism. This field is denoted GF(q) or F_q. If p is prime, GF(p) is the prime field of order p; it is the field of residue classes modulo p, and its p elements are denoted 0, 1, ..., p−1. Thus a = b in GF(p) means the same as a ≡ b (mod p).

Irreducible polynomials

Let F be a finite field. As for general fields, a non-constant polynomial f in F[x] is said to be irreducible over F if it is not the product of two polynomials of positive degree. A polynomial of positive degree that is not irreducible over F is called reducible over F.
Irreducible polynomials allow one to construct the finite fields of non-prime order. In fact, for a prime power q, let F_q be the finite field with q elements, unique up to isomorphism. A polynomial f of degree n greater than one that is irreducible over F_q defines a field extension of degree n, which is isomorphic to the field with q^n elements: the elements of this extension are the polynomials of degree lower than n; addition, subtraction and multiplication by an element of F_q are those of the polynomials; the product of two elements is the remainder of the division by f of their product as polynomials; the inverse of an element may be computed by the extended GCD algorithm (see Arithmetic of algebraic extensions).
It follows that, to compute in a finite field of non-prime order, one needs to generate an irreducible polynomial. For this, the common method is to take a polynomial at random and test it for irreducibility. For the sake of efficiency of the multiplication in the field, it is usual to search for polynomials of the shape x^n + ax + b.
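A sketch of this random search in sympy (Poly(..., modulus=p).is_irreducible performs the irreducibility test; the helper name and parameters are ours):

import random
import sympy as sp

x = sp.symbols('x')

def random_irreducible(p, n):
    # draw polynomials of the shape x^n + a x + b over GF(p) until one is irreducible
    while True:
        a, b = random.randrange(p), random.randrange(p)
        f = sp.Poly(x**n + a*x + b, x, modulus=p)
        if f.is_irreducible:
            return f

print(random_irreducible(5, 4))   # e.g. Poly(x**4 + x + 2, x, modulus=5)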
Irreducible polynomials over finite fields are also useful for pseudorandom number generators using feedback shift registers and for the discrete logarithm over F_{2^n}.

Example

The polynomial P = x^4 + 1 is irreducible over Q but not over any finite field.
  • On any field extension of F_2, P = (x + 1)^4.
  • On every other finite field, at least one of −1, 2 and −2 is a square, because the product of two non-squares is a square, and so we have:
  1. If -1=a^2, then P=(x^2+a)(x^2-a).
  2. If 2=b^2, then P=(x^2+bx+1)(x^2-bx+1).
  3. If -2=c^2, then P=(x^2+cx-1)(x^2-cx-1).    
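The three cases can be observed directly with sympy's factor, which accepts a modulus argument for prime fields (a sketch; the printed form may vary between versions, as sympy uses a symmetric representation of residues):

import sympy as sp

x = sp.symbols('x')
print(sp.factor(x**4 + 1))             # x**4 + 1: irreducible over Q
print(sp.factor(x**4 + 1, modulus=2))  # (x + 1)**4
print(sp.factor(x**4 + 1, modulus=3))  # the case -2 = 1^2: (x**2 - x - 1)*(x**2 + x - 1)
print(sp.factor(x**4 + 1, modulus=5))  # the case -1 = 2^2: (x**2 - 2)*(x**2 + 2)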

 

Complexity

Polynomial factoring algorithms use basic polynomial operations such as products, divisions, gcd, powers of one polynomial modulo another, etc. A multiplication of two polynomials of degree at most n can be done in O(n^2) operations in F_q using "classical" arithmetic, or in O(n log(n)) operations in F_q using "fast" arithmetic. A Euclidean division (division with remainder) can be performed within the same time bounds. The cost of a polynomial greatest common divisor between two polynomials of degree at most n can be taken as O(n^2) operations in F_q using classical methods, or as O(n log^2(n)) operations in F_q using fast methods. For polynomials h, g of degree at most n, the exponentiation h^q mod g can be done with O(log(q)) polynomial products, using the exponentiation by squaring method, that is O(n^2 log(q)) operations in F_q using classical methods, or O(n log(q) log(n)) operations in F_q using fast methods.
In the algorithms that follow, the complexities are expressed in terms of the number of arithmetic operations in F_q, using classical algorithms for the arithmetic of polynomials.


Wednesday, 18 September 2013

Identities Without Variables in Trigonometry

Identities without variables

The curious identity
\cos 20^\circ\cdot\cos 40^\circ\cdot\cos 80^\circ=\frac{1}{8}
is a special case of an identity that contains one variable:
\prod_{j=0}^{k-1}\cos(2^j x)=\frac{\sin(2^k x)}{2^k\sin(x)}.
Similarly:
\sin 20^\circ\cdot\sin 40^\circ\cdot\sin 80^\circ=\frac{\sqrt{3}}{8}.
The same cosine identity in radians is
 \cos\frac{\pi}{9}\cos\frac{2\pi}{9}\cos\frac{4\pi}{9} = \frac{1}{8}.
Similarly:
\tan 50^\circ\cdot\tan 60^\circ\cdot\tan 70^\circ=\tan 80^\circ.
\tan 40^\circ\cdot\tan 30^\circ\cdot\tan 20^\circ=\tan 10^\circ.
The following is perhaps not as readily generalized to an identity containing variables (but see explanation below):
\cos 24^\circ+\cos 48^\circ+\cos 96^\circ+\cos 168^\circ=\frac{1}{2}.
Degree measure ceases to be more felicitous than radian measure when we consider this identity with 21 in the denominators:

\begin{align}
& \cos\left( \frac{2\pi}{21}\right)
  + \cos\left(2\cdot\frac{2\pi}{21}\right)   
  + \cos\left(4\cdot\frac{2\pi}{21}\right) \\[10pt]
& {} \qquad {} + \cos\left( 5\cdot\frac{2\pi}{21}\right)
  + \cos\left( 8\cdot\frac{2\pi}{21}\right)
  + \cos\left(10\cdot\frac{2\pi}{21}\right)=\frac{1}{2}.
\end{align}
The factors 1, 2, 4, 5, 8, 10 may start to make the pattern clear: they are those integers less than 21/2 that are relatively prime to (or have no prime factors in common with) 21. The last several examples are corollaries of a basic fact about the irreducible cyclotomic polynomials: the cosines are the real parts of the zeroes of those polynomials; the sum of the zeroes is the Möbius function evaluated at (in the very last case above) 21; only half of the zeroes are present above. The two identities preceding this last one arise in the same fashion with 21 replaced by 10 and 15, respectively.
Many of those curious identities stem from more general facts like the following:
 \prod_{k=1}^{n-1} \sin\left(\frac{k\pi}{n}\right) = \frac{n}{2^{n-1}}
and
 \prod_{k=1}^{n-1} \cos\left(\frac{k\pi}{n}\right) = \frac{\sin(\pi n/2)}{2^{n-1}}
Combining these gives us
 \prod_{k=1}^{n-1} \tan\left(\frac{k\pi}{n}\right) = \frac{n}{\sin(\pi n/2)}
If n is an odd number (n = 2m + 1) we can make use of the symmetries to get
 \prod_{k=1}^{m} \tan\left(\frac{k\pi}{2m+1}\right) = \sqrt{2m+1}
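These product formulas are easy to sanity-check numerically in plain Python (small k and m chosen for illustration):

import math

# prod cos(2^j x) = sin(2^k x) / (2^k sin x), with k = 3 and x = 20 degrees
x, k = math.radians(20), 3
lhs = math.prod(math.cos(2**j * x) for j in range(k))
print(lhs, math.sin(2**k * x) / (2**k * math.sin(x)))   # both 0.125: the cos 20 cos 40 cos 80 identity

# prod_{k=1}^{m} tan(k pi/(2m+1)) = sqrt(2m+1), with m = 3
m = 3
lhs = math.prod(math.tan(i * math.pi / (2*m + 1)) for i in range(1, m + 1))
print(lhs, math.sqrt(2*m + 1))                          # both 2.6457..., i.e. sqrt(7)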
The transfer function of the Butterworth low-pass filter can be expressed in terms of polynomials and poles. By setting the frequency as the cutoff frequency, the following identity can be proved:
 \prod_{k=1}^{n} \sin\left(\frac{\left(2k-1\right)\pi}{4n}\right) = \prod_{k=1}^{n} \cos\left(\frac{\left(2k-1\right)\pi}{4n}\right) = \frac{\sqrt{2}}{2^{n}}

Computing π

An efficient way to compute π is based on the following identity without variables, due to Machin:
\frac{\pi}{4} = 4 \arctan\frac{1}{5} - \arctan\frac{1}{239}
or, alternatively, by using an identity of Leonhard Euler:
\frac{\pi}{4} = 5 \arctan\frac{1}{7} + 2 \arctan\frac{3}{79}.
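Both formulas already give full double precision with ordinary floating-point arctangents (for more digits one evaluates the arctangents by their Taylor series in exact arithmetic):

import math

machin = 4 * (4 * math.atan(1/5) - math.atan(1/239))
euler  = 4 * (5 * math.atan(1/7) + 2 * math.atan(3/79))
print(machin, euler, math.pi)   # all three agree to machine precision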

A useful mnemonic for certain values of sines and cosines

For certain simple angles, the sines and cosines take the form \sqrt{n}/2 for 0 ≤ n ≤ 4, which makes them easy to remember.

\begin{matrix}
\sin 0 & = & \sin 0^\circ & = & \sqrt{0}/2 & = & \cos 90^\circ &  =  & \cos \left( \frac {\pi} {2} \right) \\  \\
\sin \left( \frac {\pi} {6} \right) & = & \sin 30^\circ & = & \sqrt{1}/2 & = & \cos 60^\circ & = & \cos \left( \frac {\pi} {3} \right) \\  \\
\sin \left( \frac {\pi} {4} \right) & = & \sin 45^\circ & = & \sqrt{2}/2 & = & \cos 45^\circ & = & \cos \left( \frac {\pi} {4} \right) \\  \\
\sin \left( \frac {\pi} {3} \right) & = & \sin 60^\circ & = & \sqrt{3}/2 & = & \cos 30^\circ & = & \cos \left( \frac {\pi} {6} \right)\\  \\
\sin \left( \frac {\pi} {2} \right) & = & \sin 90^\circ & = & \sqrt{4}/2 & = & \cos 0^\circ & = & \cos 0 
\end{matrix}

Miscellany

With the golden ratio φ:
\cos \left( \frac {\pi} {5} \right) = \cos 36^\circ={\sqrt{5}+1 \over 4} = \frac{\varphi }{2}
\sin \left( \frac {\pi} {10} \right) = \sin 18^\circ = {\sqrt{5}-1 \over 4}  = {\varphi - 1 \over 2} = {1 \over 2\varphi}
Also see exact trigonometric constants.

An identity of Euclid

Euclid showed in Book XIII, Proposition 10 of his Elements that the area of the square on the side of a regular pentagon inscribed in a circle is equal to the sum of the areas of the squares on the sides of the regular hexagon and the regular decagon inscribed in the same circle. In the language of modern trigonometry, this says:
\sin^2(18^\circ)+\sin^2(30^\circ)=\sin^2(36^\circ). \,
Ptolemy used this proposition to compute some angles in his table of chords.


Monday, 16 September 2013

Square Matrices in Mathematics

Square matrices

A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. The entries aii form the main diagonal of a square matrix. They lie on the imaginary line which runs from the top left corner to the bottom right corner of the matrix.

Main types

Examples with n = 3:
Diagonal matrix 
      \begin{bmatrix}
           a_{11} & 0 & 0 \\
           0 & a_{22} & 0 \\
           0 & 0 & a_{33} \\
      \end{bmatrix}
Lower triangular matrix 
      \begin{bmatrix}
           a_{11} & 0 & 0 \\
           a_{21} & a_{22} & 0 \\
           a_{31} & a_{32} & a_{33} \\
      \end{bmatrix}
Upper triangular matrix 
      \begin{bmatrix}
           a_{11} & a_{12} & a_{13} \\
           0 & a_{22} & a_{23} \\
           0 & 0 & a_{33} \\
      \end{bmatrix}

Diagonal or triangular matrix

If all entries outside the main diagonal are zero, A is called a diagonal matrix. If only all entries above (or below) the main diagonal are zero, A is called a lower (or upper) triangular matrix.

Identity matrix

The identity matrix In of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, e.g.

I_1 = \begin{bmatrix} 1 \end{bmatrix}
,\ 
I_2 = \begin{bmatrix}
         1 & 0 \\
         0 & 1 
      \end{bmatrix}
,\ \cdots ,\ 
I_n = \begin{bmatrix}
         1 & 0 & \cdots & 0 \\
         0 & 1 & \cdots & 0 \\
         \vdots & \vdots & \ddots & \vdots \\
         0 & 0 & \cdots & 1
      \end{bmatrix}
It is a square matrix of order n, and also a special kind of diagonal matrix. It is called identity matrix because multiplication with it leaves a matrix unchanged:
AI_n = I_m A = A for any m-by-n matrix A.

Symmetric or skew-symmetric matrix

A square matrix A that is equal to its transpose, i.e., A = A^T, is a symmetric matrix. If instead A is equal to the negative of its transpose, i.e., A = −A^T, then A is a skew-symmetric matrix. For complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy A* = A, where the star or asterisk denotes the conjugate transpose of the matrix, i.e., the transpose of the complex conjugate of A.
By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; i.e., every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real. This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns, see below.

Invertible matrix and its inverse

A square matrix A is called invertible or non-singular if there exists a matrix B such that
AB = BA = In.
If B exists, it is unique and is called the inverse matrix of A, denoted A−1.
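Numerically, the inverse can be computed and the defining property checked with numpy (a sketch; the example matrix is ours):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.linalg.inv(A)                    # here [[1, -1], [-1, 2]]
print(np.allclose(A @ B, np.eye(2)))    # True: AB = I_2
print(np.allclose(B @ A, np.eye(2)))    # True: BA = I_2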

Definite matrix

Positive definite matrix:
 \begin{bmatrix}
          1/4 & 0 \\
          0 & 1 \\
      \end{bmatrix}
Q(x, y) = 1/4 x^2 + y^2. The points with Q(x, y) = 1 form an ellipse.

Indefinite matrix:
 \begin{bmatrix}
          1/4 & 0 \\
          0 & -1/4 \\
      \end{bmatrix}
Q(x, y) = 1/4 x^2 − 1/4 y^2. The points with Q(x, y) = 1 form a hyperbola.
A symmetric n×n matrix is called positive-definite (respectively negative-definite; indefinite), if for all nonzero vectors x ∈ R^n the associated quadratic form given by
Q(x) = x^T A x
takes only positive values (respectively only negative values; both some negative and some positive values). If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.
A symmetric matrix is positive-definite if and only if all its eigenvalues are positive. The table above shows two possibilities for 2-by-2 matrices.
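The eigenvalue criterion is easy to verify for the two matrices above (numpy's eigvalsh is its eigenvalue routine for symmetric matrices):

import numpy as np

pos_def    = np.array([[0.25, 0.0], [0.0,  1.0 ]])
indefinite = np.array([[0.25, 0.0], [0.0, -0.25]])
print(np.linalg.eigvalsh(pos_def))      # [0.25 1.  ]: all positive, so positive-definite
print(np.linalg.eigvalsh(indefinite))   # [-0.25  0.25]: mixed signs, so indefinite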
Allowing as input two different vectors instead yields the bilinear form associated to A:
B_A(x, y) = x^T A y.

Orthogonal matrix

An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (i.e., orthonormal vectors). Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse:
A^\mathrm{T}=A^{-1}, \,
which entails
A^\mathrm{T} A = A A^\mathrm{T} = I, \,
where I is the identity matrix.
An orthogonal matrix A is necessarily invertible (with inverse A−1 = AT), unitary (A−1 = A*), and normal (A*A = AA*). The determinant of any orthogonal matrix is either +1 or −1. A special orthogonal matrix is an orthogonal matrix with determinant +1. As a linear transformation, every orthogonal matrix with determinant +1 is a pure rotation, while every orthogonal matrix with determinant -1 is either a pure reflection, or a composition of reflection and rotation.
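A plane rotation makes these properties concrete (a quick numpy check):

import numpy as np

theta = np.pi / 6                         # rotation by 30 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R.T @ R, np.eye(2)))    # True: R^T R = I, so R^T = R^{-1}
print(np.linalg.det(R))                   # 1.0 (up to rounding): a special orthogonal matrix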
The complex analogue of an orthogonal matrix is a unitary matrix.

