The post The two kinds of random number generators most applications need appeared first on Blog for The Ultimate Pokemon Center.

- Unpredictable random generators (also known as "cryptographically strong" or "cryptographically secure") are needed by some applications for data security, usually in encryption or secure communication protocols, and in some online games. The goal here is to keep the random numbers from being guessed easily.
- Statistically random generators are needed by simulations and many games to bring an element of chance and variation to the application. The goal here is for each possible outcome to be equally likely. Statistically random generators can simulate dice, playing card shuffling, etc.

The following is a sketch of a C programming interface that specifies two functions that could meet these two needs.

```
int random(unsigned char *bytes, size_t size);
```

Generates random bits using an unpredictable-random implementation.

- **Quality**: An unpredictable-random implementation generates random bits that are unpredictable.
- **Predictability**: “Unpredictable” means that an outside party with knowledge of the implementation’s internal state at a certain point of time can guess neither prior unseen bits nor future bits of the random sequence correctly with more than a 50% chance per bit, even with many months of effort and even if the random algorithm used and extremely many outputs of the implementation are known.
- **Seeding**: If the implementation uses a deterministic algorithm to generate random numbers, its internal state must be *seeded* with unpredictable data. The internal state (the part of the algorithm’s state that can be initialized with arbitrary data) must be at least 128 bits and should be at least 238 bits (the length in bits of 54 factorial). The seed must be at least the same size as the internal state.
- **Reseeding**: The internal state should be reseeded from time to time to ensure each bit of the output is unpredictable even if an outside party somehow gains knowledge of its internal state.
- **Speed**: The implementation should select algorithms that are reasonably fast for most applications.
- **Time Complexity**: The implementation must run in amortized linear time on the size of the output array.
- **Thread Safety**: The implementation should be safe for concurrent use by multiple threads.
- **Examples**: The “/dev/urandom” device on many Unix-based operating systems; the CryptGenRandom function on Windows; cryptographic hash functions that take unpredictable signals as input (such as disk access and keystroke timings).

`bytes` is a pointer to a byte array; `size` is the number of random bytes to generate. Each bit in each byte will be randomly set to 0 or 1. Returns 0 if the function succeeds, and nonzero otherwise.
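As a sketch only: on many Unix-based systems, this function could be implemented by reading from the “/dev/urandom” device mentioned in the examples above. The function name `random_unpredictable` is hypothetical, chosen here to avoid colliding with the standard library’s `random()`.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical sketch of the unpredictable-random function for
   Unix-like systems: fill "bytes" with "size" random bytes read
   from the /dev/urandom device. Returns 0 on success and nonzero
   otherwise, as the interface above specifies. */
int random_unpredictable(unsigned char *bytes, size_t size) {
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL) return 1;
    size_t got = fread(bytes, 1, size, f);
    fclose(f);
    return got == size ? 0 : 1;
}
```

A real implementation would also address the reseeding and thread-safety points above, which this sketch leaves to the operating system.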

```
int random_fast(unsigned char *bytes, size_t size);
```

Generates random bits using a statistical-random implementation.

- **Quality**: A statistical-random implementation generates random bits that, theoretically, are uniformly randomly chosen independently of the other bits. The implementation must be almost certain to pass simple statistical randomness tests and many complex ones. (For example, any RNG algorithm that passes the three test batteries in TestU01 [L’Ecuyer and Simard 2007] meets these requirements.)
- **Predictability**: The implementation’s output must not be trivially predictable.
- **Seeding**: The implementation must be seeded with information not known *a priori* by the implementation, such as random bits from an unpredictable-random implementation. As far as practical, the seed should not be trivially predictable. The internal state must be at least 64 bits and should be at least 128 bits. The seed must be at least the same size as the internal state.
- **Reseeding**: The implementation may reseed the internal state from time to time. It should do so if its internal state’s size is less than 238 bits. If the implementation reseeds, it should do so before it generates more values than the square root of the RNG’s period.
- **Speed**: The implementation should select algorithms that are reasonably fast for most applications. The implementation may instead use an unpredictable-random implementation as long as the function remains at least as fast, in the average case, as the statistical-random implementation it would otherwise use.
- **Time Complexity**: The implementation must run in amortized linear time on the size of the output array.
- **Thread Safety**: The implementation should be safe for concurrent use by multiple threads.
- **Examples**: The “xorshift128+” and Lehmer128 random number generators.

`bytes` is a pointer to a byte array; `size` is the number of random bytes to generate. Each bit in each byte will be randomly set to 0 or 1. Returns 0 if the function succeeds, and nonzero otherwise.
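As a sketch of one possible statistical-random implementation, here is the well-known “xorshift128+” generator (one of the examples above), with its 128-bit internal state seeded via the SplitMix64 mixing function. The helper names and the use of a fixed seed are illustrative only; in practice the seed would come from an unpredictable-random source, and a thread-safe implementation would not use bare global state.

```c
#include <stdint.h>
#include <stddef.h>

/* SplitMix64: commonly used to expand a 64-bit seed into the
   larger internal state of another generator. */
static uint64_t splitmix64(uint64_t *x) {
    uint64_t z = (*x += 0x9E3779B97F4A7C15ULL);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}

/* xorshift128+: 128-bit internal state. */
static uint64_t xs_state[2];

static void xorshift128plus_seed(uint64_t seed) {
    /* The state must not be all zeros; seeding both words
       through SplitMix64 avoids that in practice. */
    xs_state[0] = splitmix64(&seed);
    xs_state[1] = splitmix64(&seed);
}

static uint64_t xorshift128plus(void) {
    uint64_t x = xs_state[0];
    const uint64_t y = xs_state[1];
    xs_state[0] = y;
    x ^= x << 23;
    xs_state[1] = x ^ y ^ (x >> 17) ^ (y >> 26);
    return xs_state[1] + y;
}

/* The statistical-random function from the interface above,
   filling the byte array eight bytes at a time. */
int random_fast(unsigned char *bytes, size_t size) {
    size_t i = 0;
    while (i < size) {
        uint64_t r = xorshift128plus();
        for (int j = 0; j < 8 && i < size; j++, i++) {
            bytes[i] = (unsigned char)(r >> (8 * j));
        }
    }
    return 0;
}
```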

A deterministic random number generator (DRNG) can’t generate more distinct random number sequences than its period. In particular, a DRNG can’t generate all permutations of a list if the factorial of the list’s size is greater than the DRNG’s period. This means that the items in a shuffled list of that size will never appear in certain orders when that DRNG is used to shuffle it.

A DRNG with period 2^{64}, for example, can’t generate all permutations of a list with more than 20 items; with period 2^{128}, more than 34 items; with period 2^{226}, more than 52 items; and with period 2^{256}, more than 57 items.

When shuffling more than 20 items, a concerned application would be well advised to use an unpredictable-random implementation.

In addition, some applications generate results based on apparently-random principles, starting from a known initial state. One notable example is a “code”, or password, for generating a particular game level in some role-playing games.

Functions for seeding random number algorithms are not included, because applications that require seeding usually care about reproducible results. Such applications often need to keep not only the RNG algorithm stable, but also any algorithm that uses that RNG algorithm (such as a game level generator), especially if it publishes seeds (for example, game level passwords). Moreover, which RNG algorithm to use for such purpose depends on the application. But here are some recommendations:

- Any RNG algorithm selected for producing reproducible results should meet at least the quality and predictability requirements of a statistical-random implementation, and should be reasonably fast.
- Any seed passed as input to that algorithm should be 64 bits or greater.

An application should only use seeding if:

- the initial state (the seed) from which the “random” result will be generated is known (for example, because it’s hard-coded or because it was entered by the user),
- the application needs to generate the same “random” result multiple times,
- it would be impractical to convey that “random” result without relying on seeding, such as:
  - by saving the result to a file, or
  - by distributing the results or the random numbers to networked users as they are generated, and
- the RNG algorithm and any procedure using that algorithm to generate that “random” result will remain stable as long as the relevant feature is still in use by the application. (Not using seeding allows either to be changed or improved without affecting the application’s functionality.)
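To illustrate the reproducibility point, here is a hedged sketch of a seeded level generator in the spirit of the game-level “codes” mentioned above. All names are hypothetical; the point is only that a stable, seeded RNG algorithm (here, the SplitMix64 mixer) makes the same seed always yield the same “random” level:

```c
#include <stdint.h>

/* SplitMix64 mixing step: a stable, seedable RNG algorithm that
   meets the statistical-quality recommendation above. */
static uint64_t mix64(uint64_t *x) {
    uint64_t z = (*x += 0x9E3779B97F4A7C15ULL);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}

/* Hypothetical level generator: fills a 16x16 tile map with tile
   values 0-3 derived entirely from the 64-bit seed (for example,
   a decoded level password). As long as this algorithm stays
   stable, the same seed reproduces the same level. */
void generate_level(uint64_t seed, unsigned char tiles[16][16]) {
    uint64_t state = seed;
    for (int y = 0; y < 16; y++) {
        for (int x = 0; x < 16; x++) {
            tiles[y][x] = (unsigned char)(mix64(&state) & 3);
        }
    }
}
```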


The post Review my arbitrary-precision number library appeared first on Blog for The Ultimate Pokemon Center.

I’ve written a library implementing arbitrary-precision decimal, binary, and rational numbers (available in C# and Java). I’d like a review of any aspect of the code, particularly the API, as well as the following issues:

* Which decimal formats should EDecimal (the arbitrary-precision decimal class) read from and write to, besides strings, which it already supports?

* What should be the string format for rational numbers?


The post Public Domain HTML 3D Library: Tumblr Mirror appeared first on Blog for The Ultimate Pokemon Center.


The post I’ve worked on LESS stylesheets lately. appeared first on Blog for The Ultimate Pokemon Center.


The post Public Domain HTML 3D Library (WebGL): Version 1.5 Released appeared first on Blog for The Ultimate Pokemon Center.

Version 1.5:

- Added support for specular maps and normal maps, including in the JSON mesh format and MTL material format.
- To support normal maps, extra methods for bitangents and tangents were added to the Mesh class.
- Added six new demos and modified several others.
- Added support for 24-bit color versions of TGA files.
- Scene3D now does frustum culling of its shapes.
- Fixed vertex normal calculation in the recalcNormals() method of the Mesh class.
- Allowed two-element arrays to be passed to the Mesh class’s texCoord3() method.
- Added a getBoundingBox method to the Mesh class.
- Added a setVisible method to the Shape and ShapeGroup classes.
- Allowed reading OBJ files with negative reference numbers.
- Added path.js (2D graphics paths) to the extras folder.
- Added an “axis” parameter to the SurfaceOfRevolution constructor and fromFunction method.
- Added vec3negate, vec3negateInPlace, vec3mul, and plane and frustum methods to the GLMath class.
- Deprecated the practice of setting shape materials directly to textures (calling the Shape#setMaterial method with a Texture object rather than a Material object).
- Deprecated certain properties of Transform that shouldn’t be exposed as a public API, and added corresponding methods for those properties.
- Fixed getPromiseResults.
- Added documentation in many places.
- Added a “meshexamples.md” demo, and edited or rearranged other demos.
- Other changes and fixes.


The post Public Domain HTML 3D Library (WebGL): Version 1.4 Released appeared first on Blog for The Ultimate Pokemon Center.

Version 1.4:

- Fixed camera.js issues (thanks to the user “the-sz” on GitHub).
- Added an extras folder with the following new scripts:
  - A CurveTube class for creating tubes from 3D curves.
  - A parametric evaluator for surfaces of revolution and 3 kinds of curves (evaluators.js).
  - A frame counter (moved from the demos).
  - A JSON converter and loader for 3D meshes (meshjson.js).
- Made objmtl.js compatible with more MTL files.
- Replaced Math.sin/Math.cos pairs with optimized versions throughout the code.
- Added a mat4transformVec3 method to GLMath.
- Added a BSplineCurve class.
- Deprecated the vertexBezier, normalBezier, colorBezier, and texCoordBezier methods of CurveEval and SurfaceEval.
- Optimized SurfaceEval’s evalSurface method when generating triangles.
- Added primitiveCount and enumPrimitives methods to the Mesh class.
- Added setMaterial and removeShape methods to the ShapeGroup class.
- The default shader program now uses modelViewMatrix instead of separate world and view uniforms.
- Fixed JSON issues in the GLUtil.loadFileFromUrl method.
- Added many new demos.
- Added a graphics filters tutorial, and expanded several other tutorials.


The post New demos needed for my HTML 3D Library appeared first on Blog for The Ultimate Pokemon Center.

I would like people to suggest or make new demos to add to the list. This is also a very good opportunity for you to test the library and see if everything works. If you make a new demo, you must release it to the Public Domain before I can add it.

Go to the issues page to comment on this, rather than here.


The post Video: Demo Web page of parametric surfaces used in my Public Domain HTML 3D Library appeared first on Blog for The Ultimate Pokemon Center.

The post The “Camera” and the Projection and View Transforms appeared first on Blog for The Ultimate Pokemon Center.

This article describes projection and view transforms commonly used in 3D graphics, especially when using my HTML 3D Library.

**Download the latest version of the library at the HTML 3D Library’s Releases page.**

The `Scene3D` class of the HTML 3D Library has a concept of a "projection transform" and a "view transform". If we use the concept of a "camera", the projection is like setting the camera’s focus and lens, and the view transform is like setting its position and orientation. `Scene3D` has methods for setting all these attributes of this abstract "camera". Two of them are `setPerspective()` and `setLookAt()`, which are shown in the example below.

```
// Set the perspective view. Camera has a 45-degree field of view
// and will see objects from 0.1 to 100 units away.
scene.setPerspective(45,scene.getClientAspect(),0.1,100);
// Move the camera back 40 units.
scene.setLookAt([0,0,40]);
// Move the camera back 30 units instead, and turn it so it
// points at (0, 2, 0), that is, up 2 units.
scene.setLookAt([0,0,30], [0,2,0]);
```

Here is an overview of transformations used in the graphics system and the HTML 3D library.

- A *world matrix* transforms an object’s own coordinates to *world space*, the coordinate system shared by every object in the scene. The world matrix is not discussed in this article.
- A *view matrix* transforms coordinates in world space to *camera space* or *eye space*.
- A *projection matrix* transforms coordinates in camera space to *clip space*.
- Additionally, the graphics pipeline (outside the HTML 3D library) converts the clip space coordinates to *normalized device coordinates*, then *screen space* coordinates when drawing objects on the screen.

A *projection matrix* transforms coordinates in camera space to *clip space*.

Two commonly used projections in 3D graphics are the perspective projection and the orthographic projection, described below.

A perspective projection gives the 3D scene a sense of depth. In this projection, closer objects look bigger than more distant objects of the same size.

The 3D scene is contained in a so-called *view volume*, and only objects contained in the view volume will be visible. The dark yellow lines above show what a perspective view volume looks like. The red spheres would not be visible under this view volume.

The view volume is bounded on all six sides by six *clipping planes*:

- The near and far clipping planes are placed a certain distance from the camera. For example, if the near clipping plane is 3 units away and the far clipping plane is 5 units away, the view volume will hold only objects between 3 and 5 units from the camera. (Strictly speaking, a near clipping plane is not necessary, but practically speaking it is, in order to make the math work out correctly.)
- The left, right, top, and bottom clipping planes form the other four sides of the volume.

Note further that:

- The angle separating the top and bottom clipping planes is the projection’s *field of view*. This is an angle similar to the aperture of a camera. The greater the vertical field of view, the greater the vertical visibility range.
- In a perspective projection, the near clipping plane segment bounding the view volume is smaller than the far clipping plane segment. This is because the four other clipping planes are not parallel and extend from the eye position.

The perspective projection converts 3D coordinates to 4-element vectors in the form (X, Y, Z, W), also known as *clip coordinates*. Since the graphics system (outside the HTML 3D library) only deals with 3D points, it divides the X, Y, and Z components by the W component to get the 3D point’s *normalized device coordinates* and achieve the perspective effect.
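This “perspective divide” can be sketched in a few lines of C (the function name is illustrative; this mirrors what the graphics system does, and is not an HTML 3D Library API):

```c
/* Perspective divide: convert 4-element clip coordinates
   (X, Y, Z, W) to 3-element normalized device coordinates by
   dividing each component by W. */
void clip_to_ndc(const double clip[4], double ndc[3]) {
    ndc[0] = clip[0] / clip[3];
    ndc[1] = clip[1] / clip[3];
    ndc[2] = clip[2] / clip[3];
}
```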

The `Scene3D` class’s `setPerspective()` and `setFrustum()` methods define a perspective projection.

`scene3d.setPerspective(fov, aspect, near, far)`

This method calculates the appropriate clipping planes given a field of view and an aspect ratio, and sets the scene’s projection matrix accordingly.

- `fov` – Vertical field of view, in degrees.
- `aspect` – Aspect ratio of the scene. You should usually use `scene3d.getClientAspect()`.
- `near`, `far` – Distance from the camera to the near and far clipping planes.

`scene3d.setFrustum(left, right, bottom, top, near, far)`

This method sets the scene’s projection matrix based on the location of the six clipping planes that bound the view volume. Their positions are chosen so that the result is a perspective projection.

- `left`, `right`, `bottom`, `top` – Location of the left, right, bottom, and top clipping planes in terms of where they meet the near clipping plane.
- `near`, `far` – Distance from the camera to the near and far clipping planes.

- perspective.html – Demonstrates a perspective projection.

An orthographic projection is one in which the left and right clipping planes are parallel to each other, and the top and bottom clipping planes are parallel to each other. This results in the near and far clipping planes having the same size, unlike in a perspective projection, and in objects of the same size not varying in apparent size with their depth.

The `Scene3D` class’s `setOrtho()` and `setOrthoAspect()` methods define an orthographic projection.

`scene3d.setOrtho(left, right, bottom, top, near, far)`

This method calculates an orthographic projection.

- `left` – Leftmost coordinate of the 3D view.
- `right` – Rightmost coordinate of the 3D view.
- `bottom` – Bottommost coordinate of the 3D view.
- `top` – Topmost coordinate of the 3D view.
- `near`, `far` – Distance from the camera to the near and far clipping planes. Either value can be negative.

`scene3d.setOrthoAspect(left, right, bottom, top, near, far, aspect)`

This method calculates an orthographic projection such that the resulting view isn’t stretched or squished in case the view volume’s aspect ratio and the scene’s aspect ratio are different.

- `left`, `right`, `bottom`, `top`, `near`, `far` – Same as in `setOrtho`.
- `aspect` – Aspect ratio of the viewport. May be omitted, in which case the scene’s aspect ratio (`scene.getClientAspect()`) is used.

There are other kinds of possible projections, such as oblique projections or isometric projections. For these and other projections, you can specify a custom projection matrix to the 3D scene using the `setProjectionMatrix()` method.

`scene3d.setProjectionMatrix(matrix)`

This method allows you to set the projection matrix to an arbitrary 4×4 matrix.

- `matrix` – The 4×4 matrix to use.

The view matrix transforms *world space* coordinates, shared by every object in a scene, to *camera space* coordinates, in which the camera is located at the center of the coordinate system: (0, 0, 0). A view matrix essentially rotates the camera and moves it to a given position in world space. Specifically:

- The camera is rotated to point at a certain object or location on the scene. This is represented by the `lookingAt` parameter in the `setLookAt()` method, below.
- The camera is placed somewhere on the scene. This is represented by the `eye` parameter in the `setLookAt()` method. It also represents the "eye position" in the perspective projection, above.
- The camera rolls itself, possibly turning it sideways or upside down. This is represented by the `up` parameter in the `setLookAt()` method. Turning the camera upside down, for example, will swap the placement of the top and bottom clipping planes, thus inverting the view of the scene.

The `setLookAt()` and `setViewMatrix()` methods are described below.

`scene3d.setLookAt(eye, lookingAt, up)`

This method allows you to set a view matrix based on the camera’s position and view.

- `eye` – Array of three elements (X, Y, Z) giving the position of the camera in world space.
- `lookingAt` – Array of three elements (X, Y, Z) giving the position the camera is looking at in world space. This is optional. The default is [0, 0, 0].
- `up` – Array of three elements (X, Y, Z) giving the vector from the center of the camera to the top. This is optional. The default is [0, 1, 0].

`scene3d.setViewMatrix(matrix)`

This method allows you to set the view matrix to an arbitrary 4×4 matrix.

- `matrix` – The 4×4 matrix to use.


The post Pokedex Data Pages Added appeared first on Blog for The Ultimate Pokemon Center.

http://upokecenter.dreamhosters.com/articles/pokedex/

