Selected Courses on Digital Art-UOWM

23 June 2020

coding

Filed under: NOTES ON CODE — admin @ 16:58

List of programming languages

From Wikipedia, the free encyclopedia
The aim of this list of programming languages is to include all notable programming languages in existence, both those in current use and historical ones, in alphabetical order, except for dialects of BASIC and esoteric programming languages.
Note: Dialects of BASIC have been moved to the separate List of BASIC dialects.
Note: This page does not list esoteric programming languages.

D

  • D
  • DASL (Datapoint’s Advanced Systems Language)
  • DASL (Distributed Application Specification Language)
  • Dart
  • DataFlex

Computer programmers are those who write computer software. Their jobs usually involve:


Digital Aesthetics: Introduction

Filed under: NOTES ON MEDIA ARTS, TEXTS — admin @ 16:49
 
 
Digital Aesthetics: Introduction
Claudia Giannetti
The early twentieth century saw the formation in various fields of new theoretical approaches sharing a skeptical attitude towards the fundamental certainties that had profoundly influenced occidental culture and science. Towards the mid-twentieth century concepts like truth, reality, reason and knowledge became central in an intensive contest between rationalism and relativism. In the course of this debate, several theories were dissociated from the self-referential character of their scientific disciplines and increasingly placed in correlation with other fields. Examples of metadisciplinary models include the cybernetic analysis of message transmission and man-machine communication or, more recently, postmodernist philosophy and its notion of ‹contaminated,› ‹weak› thinking. [1] This relativism manifested itself in various aspects of art: as an essential component in the process of producing experimental art from the first avantgarde movements onward; in the radical transformation of the forms of art reception; in the tendency to interconnect and establish interchange among various art genres (discernible in interventionist and interdisciplinary works or ‹mixed media›); and finally in the intensified exchange among art, science, and technology. Artistic practice appropriated new media—initially photography and film, later video and computer—and new communication systems—post and telephone, followed by television and Internet. Under this premise, and above all from the 1960s onward, a gradual shift set in away from academic, orthodox positions attempting to confine art to traditional techniques, and aesthetics to ontological foundations.
However, the profound transformations resulting from these new approaches did not invariably meet with understanding, let alone acceptance, from artists. If one further takes into consideration the recently re-ignited controversy about the long-predicted crises of art and philosophical aesthetics, as well as the widespread discourse among postmodernist writers which was linked to tendencies in technological and academic theory, then everything does in fact seem to point toward a disintegration of art and aesthetics. Yet a large part of such polemics can be attributed to the fact that aesthetic theory and artistic practice have gone separate ways. Artists’ increasing use of technology is bringing to light a far-reaching and on-going discrepancy between artistic perception, art theory, and aesthetics, which are seen to be notably diverging instead of developing synchronously and congruently. This gulf between theoretical «corpus» and artistic practice culminates in a paradox that without doubt leads to the often-proclaimed end of art.
Nevertheless the conviction remains that certain symptoms of transition cannot be immediately equated with the radical disintegration of the fields involved. It is rather the case that new intellectual approaches and modes of experiencing must be found in order to enable the analysis and assimilation—as opposed to rejection—of the contemporary phenomena. One access route to these new forms is shown by the theory and practice of media art, and of interactive media art especially, whose renewing concepts are discernible in the fact that aesthetic theory is no longer focussed exclusively on the art object itself, but on its process, on system and contexts, on the broad linkage of different disciplines, and on reformulating the roles of the maker and the viewer of a work of art.
The complex process of transformation undergone by art and aesthetics, as well as the closely intermeshed interdisciplinary relationships, can be understood only by investigating those phenomena and theories which have so far driven forward the syntopy [2] of art, science, and technology, and in the future will continue to do so. It is not sufficient to describe the current state of art by concentrating on its epicenter; instead one must expand the horizon of consideration to adjacent fields and trace the historical developments in which corresponding changes and contemporary phenomena can be discerned. One aim of this hypertext monograph is to work out an aesthetic concept inherently formed by the context and creative experience of interactivity-based works, as well as their presentation and reception. The intention is to show potential paths towards a renewal of aesthetic discourses: paths already smoothed by those pioneers and artists whose tracks this essay follows. In this way various concepts of science, technology, and art are linked with a view to revising the notions of art, aesthetics, and spectator.
Without a doubt the artistic use of new technologies and the specific current forms of interlocking science and art lead to diverse formulations of questions—of practical and formal, as well as conceptual and philosophical nature—to which only future developments will deliver an answer. The «Aesthetics of the Digital» addresses several of these principal questions. Some contain possible answers, others lead to new questions that open up space for further considerations.
Translation: Tom Morrison
ART, SCIENCE, AND TECHNOLOGY
Claudia Giannetti
 
Art – science – art
Deliberations on the connection between art and science have various points of departure. The most general considerations are limited to the assumption of a parallel development. In his writings published in 1970, Werner Heisenberg, who along with Max Planck counts as a founding father of quantum theory, stated that the tendencies towards abstraction in the sciences were comparable with those in the field of art. According to Heisenberg, new artistic and scientific forms can result only from new content, but the converse does not apply. To renew art or to revolutionize science, he wrote, meant to create new content and concepts, and not just new forms. [1]
A question more complex than that of parallels between art and science is the extent to which art influences the sciences. According to Peter Weibel, this question can be answered only methodologically, that is, by applying a comparison which views art and science as methods. While science, says Weibel, is distinctly methodological in character, art is generally not regarded as a method: «This is our first claim: art and science can only reasonably be compared if we accept that both are methods. This does not mean that we declare that both have the same methods. We only want to declare that both have a methodological approach, even if their methods are or can be different.» [2]
Accordingly it would be permissible to view art and science as convergent in the methodological sense. As Weibel sees it, science is influenced by art in regard to its methods, but not by its products and references: «Because any time science develops the tendency for its methods to become too authoritarian, too dogmatic, science turns to art and to the methodology of art, which is plurality of methods.» [3] Objective nature exists neither in the framework of the sciences nor in culture independently of social construction; «art and science meet and converge in the method of social construction.» [4]
This position finds its most radical expression in the science-theoretical contributions of Paul Feyerabend. As a critic of scientific rationalism, he develops new interpretations and connections among the arts and sciences. He is of the opinion that artists and scientists developing a style or theory frequently pursue a secondary intention, namely that of representing ‹the› truth or ‹the› reality. However, artistic styles are closely connected with styles of thought.
What a specific form of thought understands by concepts such as truth or reality is what that form of thought asserts as truth. When one decides in favor of a style, a reality, or a form of truth, then one always chooses a human-made construction. In other words, Feyerabend negates the possibility of absolute rationality and logic in regard to that which is created by the human mind. He asserts that this relativist, and in a certain sense irrational, factor inherent in every branch of science places science in the proximity of art. According to Feyerabend, the sciences are not an institution of objective truth, but are arts along the lines of a progressive understanding of art. [5]
Feyerabend’s line of argument reflects the skepticism that deeply influenced occidental culture and science well into the twentieth century. The aforementioned questions of truth, reality and reason are central components of the contest between rationalism and relativism affecting art no less directly than science. If the nature of science were to be considered a research method under the premises of reality, plausibility, and dialectics, then whoever attempted to identify these three principles by strictly observing the complexity of the objects would, according to the Spanish scientist Jorge Wagensberg, reach the conclusion that the object resisted the method. The only manner of proceeding would be to «soften up» the method, with the result that «science is transformed into ideology.» «At its core ideology means not research, but faith. It follows from this consideration that one must stop with ideology all the holes which science has itself failed to stop. […] If the knowledge towards which we aspire is ruled not by laws but by world-views, then it would seem expedient to take our leave of scientific methods, and perhaps even adopt principles radically opposed to the latter. Precisely that is the case in art, in a kind of knowledge whose creators have not the least interest in distancing themselves from their creation.» [6]
Of particular relevance to the understanding of a new interpenetration of art and science is the generative nature of either area, which brings forth words or world-views of its own. For that reason, «the worlds of art and science are ideologically no longer opposites,» as Ilya Prigogine states, «the variety of the significates and the basic opacity of the world are reflected in new languages and new formalisms.» [7]
The origins of information theory
The technological revolution received its fundamental impetus from the first industrial revolution in the nineteenth century. By starting up a process of mechanization, the industrial revolution triggered the phenomenon of crises of control. [8] The mounting production levels resulting from mechanization led to the need for control systems to accelerate the flow of information. Researchers began to seek solutions in feedback techniques, automatic control systems, and information processing.
Under the title «On Governors» in 1868, Clerk Maxwell presented the first theoretical study towards an analysis of control and feedback mechanisms, so ushering in the radical transformation in automatic control engineering. By the late nineteenth century, a series of developments and technical innovations were underway that in the 1940s would serve as the basis of a new theory, namely cybernetics. [9]
The control revolution produced not only feedback techniques and a new hierarchization of media, but also revolutionized the cultural reproduction forms of society. [10] This included areas like communications and art, since the technologies exercised a direct influence on the forms of sociocultural (re)production.
Until then, nevertheless, the themes associated with control mechanisms and automation were discussed in connection with only one common parameter, namely energy. As the basic concept of Newtonian mechanics, energy retained the same position in the natural sciences and in research fields like acoustics, electrical science, and optics. The invariant of ‹mass› similarly occupied a central position in physics. However, as production techniques continued to be improved, so the relationship of human and machine began to change likewise, leading to the emergence of questions about new terms and theories able to make this communication process between biological and technological systems the object of targeted research.
The constitution of two new disciplines: cybernetics and artificial intelligence
That «society can only be understood through a study of the messages and the communication facilities which belong to it; and that in the future development of these messages and communication facilities, messages between man and machines, between machines and man, and between machine and machine, are destined to play an ever increasing part,» [11] was the key idea of the American mathematician Norbert Wiener (1894–1964), which he elaborated in his book «The Human Use of Human Beings: Cybernetics and Society,» published in 1950 after a first technical study, «Cybernetics, or Control and Communication in the Animal and the Machine,» of 1948. In 1950 likewise, the British mathematician Alan Turing (1912–1954) raised the question of the feasibility of logical thought by machines. In his essay «Computing Machinery and Intelligence,» published in volume 59 of the philosophical journal «Mind,» Turing proceeds from the basic question with which his text begins: «Can machines think?»
Until the mid-twentieth century no more than a few researchers working in isolation were concerned with subjects such as communication between dissimilar systems (for instance, biological and technical systems), or with the feasibility of technically designing thought machines. In addition to Wiener and Turing their ranks included Charles Babbage, Claude Shannon, Warren Weaver and Hermann Schmidt. However, from the 1950s on these subjects rapidly became two fields of basic research: cybernetics and Artificial Intelligence. [12] The two aforementioned texts triggered a flood of publications containing speculation and analysis on these subjects. In the first three years after 1950 alone, more than a thousand published essays dealt with intelligence and with communication with and between machines. Yet when Turing published his essay there existed no more than four digital computers worldwide (Mark I and EDSAC in England, ENIAC and BINAC in the USA). [13] Although Turing’s theorem—everything the human mind can do in the form of an algorithm can also be carried out by a Universal Turing Machine—was based on models so far investigated only as a hypothetical experiment, several researchers were inspired to empirically confirm or disprove it by building machines.
Communication
The approach of cybernetics—a name derived from the Greek term ‹kybernetes› (steersman)—consists in transferring the theory of control and message transmission, whether in the machine or in a living being, to the fields of communication and machine control. The objective is to investigate the relationships between animal and machine, and in the case of the machine the specific mode of its behavior, as a characteristic of the performance to be expected. [14]
On the basis of the observation of certain analogies between machines and living organisms, Wiener asserts that no reason actually exists not to make a machine resemble a human being, since both develop tendencies toward decreasing entropy, meaning that both are examples of local anti-entropic phenomena.
Turing likewise conceded priority to the subject of communication. His famous experiment—the imitation game, as he called it, known today as the Turing Test—for verifying the intelligence of a computer was concerned less with the actual construction of such a machine than with simulating with machines the human capability of communication. Turing was here acting in line with a tradition of measuring the faculty of thought by the ability to use human language. Descartes had already presented the logically semantic usage of language as a criterion for identifying thinking beings. For a long time, the mastery of semantics would remain a basic problem of Artificial Intelligence.
Information
In contrast to that tradition Wiener’s cybernetics sought operational ways of developing a specific language that would enable communication between dissimilar systems, and aimed to adapt semantics to specific goals in the process. Viewed from this perspective, Wiener’s theory replaced the notion of energy with that of information as the elementary parameter of communication, and thus postulated the definition of this new invariant for cybernetic science as a whole, which is a basic prerequisite for understanding the range of the cybernetic approach.
Unlike Newton’s mechanics, which operates with closed systems, information is applied to open systems. In this way it must be seen as a key enabling linkage and communication between dissimilar systems, and between the latter and the external world. ‹Mass› and ‹energy› are directly related to matter in the natural sciences, whereas ‹information› is not conveyed by any substance, but is based on variable properties: information can be reproduced (duplicated or copied), destroyed (erased), or reiterated (repeated). «Information is a name for the content of what is exchanged with the outer world as we adjust to it, and make our adjustment felt upon it. The process of receiving and of using information is the process of our adjusting to the contingencies of the outer environment, and of our living effectively within that environment.» [15] To this extent, not the possible quantity of circulating information is crucial to the effectiveness of communication, but the degree to which this information is integrated into communication. Along the lines of cybernetics, then, significant information is not the entirety of all information transmitted, but that information which passes through the ‹filters.›
Feedback
In the field of information and communication Wiener devoted particular attention to the question of automatons and the development of feedback models. His core interest lay in investigating machines capable of evaluating input and of integrating the stored experience into the further feedback loops. In this respect, feedback is a method of making systems self-regulating, by which the results of preceding activities are re-integrated in the procedural sequence and thus enable runtime corrections to be made permanently. To this end, machines must be capable of learning processes.
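As a concrete illustration, here is a minimal proportional feedback loop in Python (the constants are made up for the sketch; this is not an example from Wiener's text). Each cycle the result of the preceding step is measured, the error is fed back, and the next action is corrected at runtime:

    # Minimal proportional feedback loop (illustrative constants).
    target = 20.0   # desired value, e.g. room temperature in deg C
    value = 5.0     # current state of the system
    gain = 0.5      # how strongly the error is fed back

    for step in range(10):
        error = target - value   # evaluate the result of the previous activity
        value += gain * error    # re-integrate it into the next action
        print(f"step {step}: value = {value:.2f}")

    # The error halves on every cycle, so the value converges on the target:
    # the system is self-regulating in the sense described above.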
Although his approaches and conclusions are very different from those of Wiener, Turing in his essay likewise clearly indicated the necessity of developing systems capable of learning. Devoted to the subject of learning machines, the essay takes as its starting point the principle that education presupposes a channel of communication between teacher and pupil; the same requirement holds for the development of interactive, digital systems. This communication channel permits bi-directional information exchange, and therefore also learning processes. On the basis of this method, Turing repudiated the thesis set up by Ada Lovelace in 1842. [22] Using investigations made with Charles Babbage’s ‹Analytical Engine,› Lady Lovelace had claimed that a machine can do only that which it is instructed to do, and therefore is never capable of producing anything truly new. [23] Turing contradicted this thesis with the question «who can be certain that ‹original work› that he has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known general principles?» [24] He further pointed out that the machine must be to a certain degree ‹undisciplined› or random-controlled in order that its behavior can be considered intelligent. [25]
Precisely this element of chance was what lent the machine ‹creative› ability, namely the ability to solve problems. Although discrete machines that could pass the Turing Test are feasible, they would succeed not because they were replicas of the human brain but because they would have been programmed accordingly. As Turing himself realized, the basic problem lies in the area of programming. In fact, it was not necessary to wait the fifty years assumed by Turing in order to program «computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than a 70 per cent chance of making the right identification after five minutes of questioning.» [26] The programs have been written, and have passed the Turing Test with a high degree of interactivity. One might therefore conclude that the problem is not solely confined to investigating the possibilities of Artificial Intelligence. [27]
Viewed from the contemporary perspective, cybernetics and AI cannot be reduced to solely scientific, economic, or technical interest. Since these theories belong to a socio-technical field in which communication structures, world-views and people-views are formed and transformed, they are concerned with philosophic issues of perception, cognition, language, ethics, and aesthetics. If information technology is basically working towards the automation of mental processes, then it directly or indirectly reaches into disciplines concerned with human cognition or creativity.
Translation by Tom Morrison

GLOSSARY OF TERMS

Filed under: NOTES ON DIGITAL IMAGE — admin @ 16:46
GLOSSARY OF TERMS
1-bit color -The lowest number of colors per pixel in which a graphics file can be stored. In 1-bit color, each pixel is either black or white.
8-bit color/grayscale – In 8-bit color, each pixel has eight bits assigned to it, providing 256 colors or shades of gray, as in a grayscale image.
24-bit color -In 24-bit color, each pixel has 24 bits assigned to it, representing 16.7 million colors. 8 bits – or one byte – is assigned to each of the red, green,
and blue components of a pixel.
32-bit color – A display setting often referred to as true color, offering a palette of over 4 billion colors (2^32, or 4,294,967,296).
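The color counts quoted in these bit-depth entries all follow from one rule: n bits per pixel yield 2^n distinct values. A quick Python check (an illustrative sketch, not part of the original glossary):

    # Distinct values representable at common bit depths: 2 ** n.
    for bits in (1, 8, 24, 32):
        print(f"{bits}-bit color: {2 ** bits:,} values")

    # 24-bit "true color" is one byte per channel:
    # 256 levels each of red, green, and blue.
    print(256 ** 3)   # 16777216, the "16.7 million colors" above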
Additive Colors – Red, Green, and Blue are referred to as additive colors. Red+Green+Blue=White.
Algorithm -The specific process in a computer program used to solve a particular problem.
Aliasing – An effect caused by sampling an image (or signal) at too low a rate. It makes rapid changes (high-texture areas) in an image appear as slow changes in the sampled image. Once aliasing occurs, there is no way to accurately reproduce the original image from the sampled image.
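A minimal numeric demonstration of this effect (a NumPy sketch using a 1-D signal rather than an image; the frequencies are chosen for illustration): a 9 Hz sine sampled at only 10 samples per second produces exactly the same samples as a 1 Hz sine, and nothing in the sampled data can tell the two apart.

    import numpy as np

    fs = 10.0                           # sampling rate: 10 samples per second
    t = np.arange(10) / fs              # one second of sample instants

    fast = np.sin(2 * np.pi * 9 * t)    # 9 Hz signal (Nyquist limit here is 5 Hz)
    alias = -np.sin(2 * np.pi * 1 * t)  # a 1 Hz sine, sign-flipped

    # The rapid 9 Hz change masquerades as a slow 1 Hz change:
    print(np.allclose(fast, alias))     # True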
Analog – Analog data can be represented electronically by a continuous waveform signal. Examples of analog items are traditional photographic images and phonograph albums.
Anti-Aliasing – The process of reducing stair-stepping by smoothing edges where individual pixels are visible.
Application -A computer software program designed to meet a specific need.
Binary -A coding or counting system with only two symbols or conditions (off/on, zero/one, mark/space, high/low). The binary system is the basis
for storing data in computers.
Bit – A binary digit, a fundamental digital quantity representing either 1 or 0 (on or off).
Bitmap (BMP) – An image made up of dots, or pixels. Refers to a raster image, in which the image consists of rows of pixels rather than vector coordinates.
Channel – One piece of information stored with an image. True color images, for instance, have three channels: red, green, and blue.
Chroma – The color of an image element (pixel). Chroma is made up of saturation + hue values, but separate from the luminance value.
CMYK (Cyan, Magenta, Yellow, Black) – One of several color encoding systems used by printers for combining primary colors to produce a full-color image. In CMYK, colors are expressed by the “subtractive primaries” (cyan, magenta, yellow) and black. Black is called “K” or keyline since black, keylined text appears on this layer.
Compression – The reduction of data to reduce file size for storage. Compression can be “lossy” (such as JPEG) or “lossless” (such as TIFF LZW). Greater
reduction is possible with lossy compression than with lossless schemes.
Continuous Tone – An image where brightness appears consistent and uninterrupted. Each pixel in a continuous tone image file uses at least one byte each for
its red, green, and blue values. This permits 256 density levels per color or more than 16 million mixture colors.
Digital vs. analog information – Digital data are represented by discrete values. Analog information is represented by ranges of values, and is therefore less
precise. For example, you get clearer sound from an audio CD (which is digital) than from an audiocassette (which is analog). Computers use digital data.
Desktop Publishing – Describes the digital process of combining text with visuals and graphics to create brochures, newsletters, logos, electronic slides and
other published work with a computer.
Digital – A system or device in which information is stored or manipulated by on/off impulses, so that each piece of information has an exact or repeatable
value (code).
Digitization – The process of converting analog information into digital format for use by a computer.
Dithering – A method for simulating many colors or shades of gray with only a few. A limited number of same-colored pixels located close together are seen as
a new color.
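Ordered dithering is one classic way to do this. The sketch below (illustrative Python/NumPy, not tied to any particular product) thresholds a grayscale image against a tiled 2x2 Bayer matrix, so a flat mid-gray area becomes a black-and-white checkerboard that the eye averages back to gray:

    import numpy as np

    # 2x2 Bayer threshold matrix, scaled into the 0..1 range.
    bayer = np.array([[0, 2],
                      [3, 1]]) / 4.0

    def ordered_dither(gray):
        """Reduce an 8-bit grayscale image to pure black and white."""
        h, w = gray.shape
        thresholds = np.tile(bayer, (h // 2 + 1, w // 2 + 1))[:h, :w]
        return (gray / 255.0 > thresholds).astype(np.uint8) * 255

    patch = np.full((4, 4), 127, dtype=np.uint8)  # a flat mid-gray patch
    print(ordered_dither(patch))                  # checkerboard of 0 and 255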
Download – The transfer of files or other information from one piece of computer equipment to another.
DPI (Dots Per Inch) -The measurement of resolution of a printer or video monitor based on dot density. For example, most laser printers have a resolution of
300 dpi, most monitors 72 dpi, and most PostScript imagesetters 1200 to 2450 dpi. The measurement can also relate to pixels in an input file, or line screen
dots (halftone screen) in a pre-press output film.
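Converting between pixels, dpi, and physical size is simple division. A short sketch with hypothetical numbers:

    # Physical size = pixels / resolution (dots or pixels per inch).
    width_px, height_px = 1800, 1200        # hypothetical image dimensions
    print(width_px / 300, height_px / 300)  # 6.0 x 4.0 inches on a 300 dpi printer
    print(width_px / 72, height_px / 72)    # ~25 x ~16.7 inches at 72 dpi monitor scale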
Driver – A software utility designed to tell a computer how to operate an external device. For instance, to operate a printer or a scanner, a computer will need a specific driver.
FireWire – A very fast external bus that supports data transfer rates of up to 400 Mbps. FireWire was developed by Apple and falls under the IEEE 1394 standard. Other companies follow IEEE 1394 but use names such as Lynx and i.LINK.
FTP (File Transfer Protocol) – A universal protocol for transferring files on the Internet.
GIF File Format – Stands for Graphics Interchange Format, a raster-oriented graphic file format developed by CompuServe to allow exchange of image files across multiple platforms.
Gigabyte (GB) -A measure of computer memory or disk space consisting of about one thousand million bytes (a thousand megabytes). The actual value is
1,073,741,824 bytes (1024 megabytes).
Gray Scale – A term used to describe an image containing shades of gray as well as black and white.
Halftone Image – An image reproduced through a special screen made up of dots of various sizes to simulate shades of gray in a photograph. Typically used for
newspaper or magazine reproduction of images.

Hue -A term used to describe the entire range of colors of the spectrum; hue is the component that determines just what color you are using. In
gradients, when you use a color model in which hue is a component, you can create rainbow effects.
Image Resolution – The number of pixels per unit length of image. For example, pixels per inch, pixels per millimeter, or pixels wide.
Import – The process of bringing data into a document from another computer, program, type of file format, or device.
Jaz Drive – A computer disk drive made by Iomega that enables users to save about 1000 megabytes (1 gigabyte) of information on its special disks.
JPEG (Joint Photographic Experts Group) -A technique for compressing full-color bit-mapped graphics.
Kilobyte – An amount of computer memory, disk space, or document size consisting of approximately one thousand bytes. Actual value is 1024
bytes.
Lossless compression – Reduces the size of files by creating internal shorthand that rebuilds the data exactly as it was before compression. Thus, it is said to be non-destructive to image data when used.
Lossy compression – A method of reducing image file size by throwing away unneeded data, causing a slight degradation of image quality. JPEG is a
lossy compression method.
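Run-length encoding is perhaps the simplest example of the lossless idea: repeated values are stored as (value, count) pairs, and decoding rebuilds the data exactly. (A toy Python sketch for illustration; TIFF LZW and JPEG use far more sophisticated schemes.)

    from itertools import groupby

    def rle_encode(data):
        """Collapse each run of repeated bytes into a (value, length) pair."""
        return [(value, len(list(run))) for value, run in groupby(data)]

    def rle_decode(pairs):
        """Rebuild the original bytes exactly -- nothing is thrown away."""
        return bytes(value for value, count in pairs for _ in range(count))

    scanline = b"\x00" * 20 + b"\xff" * 12 + b"\x00" * 20
    packed = rle_encode(scanline)
    assert rle_decode(packed) == scanline                   # lossless round trip
    print(len(scanline), "bytes ->", len(packed), "runs")   # 52 bytes -> 3 runs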
Mask – A defined area used to limit the effect of image-editing operations to certain regions of the image. In an electronic imaging system, masks
are drawn manually (with a stylus or mouse) or created automatically–keyed to specific density levels or hue, saturation and luminance values in the
image. It is similar to photographic lith masking in an enlarger.
Megabyte (MB) – An amount of computer memory consisting of about one million bytes. The actual value is 1,048,576 bytes.
Moire – A visible pattern that occurs when one or more halftone screens are misregistered in a color image.
Multimedia – The combination of two or more media into a single presentation. For example, combining video, audio, photos, graphics, and/or animations into a presentation.
Network – A group of computers connected to communicate with each other, share resources and peripherals.
Palette – The set of colors available to a computer or device. The palette allows the user to choose which colors are available for the computer to display. The more colors, the larger the data and the more processing time required to display your images. If the system uses 24-bit color, then over 16.7 million colors are included in the palette.
Pixel (PICture ELement) -The smallest element of a digitized image. Also, one of the tiny points of light that make up a picture on a computer screen.
PNG (Portable Network Graphics) – Pronounced “ping.” A standard approved by the World Wide Web Consortium to replace GIF, because GIF uses a patented data compression algorithm. PNG is completely patent- and license-free.
PostScript – A page description language developed by Adobe Systems, Inc. to control precisely how and where shapes and type will appear on a page.
Software and hardware may be described as being PostScript compatible.
RAM – Random Access Memory. The most common type of computer memory; where the CPU stores software, programs, and data currently being used.
RAM is usually volatile memory, meaning that when the computer is turned off, crashes, or loses power, the contents of the memory are lost. A large amount
of RAM usually offers faster manipulation or faster background processing.
Raster – Raster images are made up of individual dots, each of which has a defined value that precisely identifies its specific color, size, and place within the image. (Also known as bitmapped images.)
Render – The final step of an image transformation or of generating a three-dimensional scene, through which the resulting image is drawn on the screen.
Resize – To alter the resolution or the horizontal or vertical size of an image.
Resolution – The number of pixels per unit length of image. For example, pixels per inch, pixels per millimeter, or pixels wide.
RGB – Short for Red, Green, and Blue; the primary colors used to simulate natural color on computer monitors and television sets.
Saturation – The degree to which a color is undiluted by white light. If a color is 100 percent saturated, it contains no white light. If a color has no saturation, it is a shade of gray.
Software – Written coded commands that tell the computer what tasks to perform. For example, Word, Photoshop, Picture Easy, and PhotoDeluxe are software programs.
Subtractive colors – Transparent colors that can be combined to produce a full range of color. Subtractive colors subtract or absorb elements of
light to produce other colors.
TIFF (Tagged Image File Format) -The standard file format for high-resolution bit-mapped graphics. TIFF files have cross-platform compatibility.
TWAIN – Protocol for exchanging information between applications and devices such as scanners and digital cameras. TWAIN makes it possible for
digital cameras and software to “talk” with one another on PCs.
Unsharp Masking – A process by which the apparent detail of an image is increased; generally accomplished by the input scanner or through
computer manipulation.
USB (Universal Serial Bus) -The USB offers a simplified way to attach peripherals and have them be recognized by the computer. USB ports are about 10
times faster than a typical serial connection. These USB ports are usually located in easy to access locations on the computer.
Virtual Memory -Disk space on a hard drive that is identified as RAM through the operating system, or other software. Since hard drive memory is often less
expensive than additional RAM, it is an inexpensive way to get more memory and increase the operating speed of applications.
WYSIWYG – What You See Is What You Get. Refers to the ability to output data from the computer exactly as it appears on the screen.

Computational Intelligence

Filed under: TEXTS — admin @ 16:45
Computational Intelligence
Michael I. Jordan and Stuart Russell
There are two complementary views of artificial intelligence (AI): one as an engineering discipline concerned with the creation of intelligent machines, the other as an empirical science concerned with the computational modeling of human intelligence. When the field was young, these two views were seldom distinguished. Since then, a substantial divide has opened up, with the former view dominating modern AI and the latter view characterizing much of modern cognitive science. For this reason, we have adopted the more neutral term “computational intelligence” as the title of this article—both communities are attacking the problem of understanding intelligence in computational terms.
It is our belief that the differences between the engineering models and the cognitively inspired models are small compared to the vast gulf in competence between these models and human levels of intelligence. For humans are, to a first approximation, intelligent; they can perceive, act, learn, reason, and communicate successfully despite the enormous difficulty of these tasks. Indeed, we expect that as further progress is made in trying to emulate this success, the engineering and cognitive models will become more similar. Already, the traditionally antagonistic “connectionist” and “symbolic” camps are finding common ground, particularly in their understanding of reasoning under uncertainty and learning. This sort of cross-fertilization was a central aspect of the early vision of cognitive science as an interdisciplinary enterprise.
1 Machines and Cognition
The conceptual precursors of AI can be traced back many centuries. LOGIC, the formal theory of deductive reasoning, was studied in ancient Greece, as were ALGORITHMS for mathematical computations. In the late seventeenth century, Wilhelm Leibniz actually constructed simple “conceptual calculators,” but their representational and combinatorial powers were far too limited. In the nineteenth century, Charles Babbage designed (but did not build) a device capable of universal computation, and his collaborator Ada Lovelace speculated that the machine might one day be programmed to play chess or compose music. Fundamental work by ALAN TURING in the 1930s formalized the notion of universal computation; the famous CHURCH-TURING THESIS proposed that all sufficiently powerful computing devices were essentially identical in the sense that any one device could emulate the operations of any other. From here it was a small step to the bold hypothesis that human cognition was a form of COMPUTATION in exactly this sense, and could therefore be emulated by computers.
By this time, neurophysiology had already established that the brain consisted largely of a vast interconnected network of NEURONS that used some form of electrical signalling mechanism. The first mathematical model relating computation and the brain appeared in a seminal paper entitled “A logical calculus of the ideas immanent in nervous activity,” by WARREN MCCULLOCH and WALTER PITTS (1943). The paper proposed an abstract model of neurons as linear threshold units—logical “gates” that output a signal if the weighted sum of their inputs exceeds a threshold value (see COMPUTING IN SINGLE NEURONS). It was shown that a network of such gates could represent any logical function, and, with suitable delay components to implement memory, would be capable of universal computation. Together with HEBB’s model of learning in networks of neurons, this work can be seen as a precursor of modern NEURAL NETWORKS and connectionist cognitive modeling. Its stress on the representation of logical concepts by neurons also provided impetus to the “logicist” view of AI.
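The threshold unit itself is easy to state in code. A small Python sketch (the weights and thresholds below are conventional textbook choices, not taken from the 1943 paper):

    def mcp_unit(inputs, weights, threshold):
        """McCulloch-Pitts neuron: output 1 iff the weighted sum of the
        inputs reaches the threshold, else 0."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Threshold units realizing the basic logical gates:
    AND = lambda a, b: mcp_unit((a, b), (1, 1), threshold=2)
    OR  = lambda a, b: mcp_unit((a, b), (1, 1), threshold=1)
    NOT = lambda a:    mcp_unit((a,),   (-1,),  threshold=0)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
    print("NOT 0:", NOT(0), "NOT 1:", NOT(1))

Since any logical function can be composed from such gates, and delay elements supply memory, this is the sense in which the paper argued that networks of these units suffice for universal computation.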
The emergence of AI proper as a recognizable field required the availability of usable computers; this resulted from the wartime efforts led by Turing in Britain and by JOHN VON NEUMANN in the United States. It also required a banner to be raised; this was done with relish by Turing’s (1950) paper “Computing Machinery and Intelligence,” wherein an operational definition for intelligence was proposed (the Turing test) and many future developments were sketched out.
One should not underestimate the level of controversy surrounding AI’s initial phase. The popular press was only too ready to ascribe intelligence to the new “electronic super-brains,” but many academics refused to contemplate the idea of intelligent computers. In his 1950 paper, Turing went to great lengths to catalogue and refute many of their objections. Ironically, one objection already voiced by Kurt Gödel, and repeated up to the present day in various forms, rested on the ideas of incompleteness and undecidability in formal systems to which Turing himself had contributed (see GÖDEL’S THEOREMS and FORMAL SYSTEMS, PROPERTIES OF). Other objectors denied the possibility of CONSCIOUSNESS in computers, and with it the possibility of intelligence. Turing explicitly sought to separate the two, focusing on the objective question of intelligent behavior while admitting that consciousness might remain a mystery—as indeed it has.
The next step in the emergence of AI was the formation of a research community; this was achieved at the 1956 Dartmouth meeting convened by John McCarthy. Perhaps the most advanced work presented at this meeting was that of ALLEN NEWELL and Herb Simon, whose program of research in symbolic cognitive modeling was one of the principal influences on cognitive psychology and information-processing psychology. Newell and Simon’s IPL languages were the first symbolic programming languages and among the first high-level languages of any kind. McCarthy’s LISP language, developed slightly later, soon became the standard programming language of the AI community and in many ways remains unsurpassed even today.
Contemporaneous developments in other fields also led to a dramatic increase in the precision and complexity of the models that could be proposed and analyzed. In linguistics, for example, work by Chomsky (1957) on formal grammars opened up new avenues for the mathematical modeling of mental structures. NORBERT WIENER developed the field of cybernetics (see CONTROL THEORY and MOTOR CONTROL) to provide mathematical tools for the analysis and synthesis of physical control systems. The theory of optimal control in particular has many parallels with the theory of rational agents (see below), but within this tradition no model of internal representation was ever developed.
As might be expected from so young a field with so broad a mandate that draws on so many traditions, the history of AI has been marked by substantial changes in fashion and opinion. Its early days might be described as the “Look, Ma, no hands!” era, when the emphasis was on showing a doubting world that computers could play chess, learn, see, and do all the other things thought to be impossible. A wide variety of methods was tried, ranging from general-purpose symbolic problem solvers to simple neural networks. By the late 1960s, a number of practical and theoretical setbacks had convinced most AI researchers that there would be no simple “magic bullet.” The general-purpose methods that had initially seemed so promising came to be called weak methods because their reliance on extensive combinatorial search and first-principles knowledge could not overcome the complexity barriers that were, by that time, seen as unavoidable. The 1970s saw the rise of an alternative approach based on the application of large amounts of domain-specific knowledge, expressed in forms that were close enough to the explicit solution as to require little additional computation. Ed Feigenbaum’s gnomic dictum, “Knowledge is power,” was the watchword of the boom in industrial and commercial application of expert systems in the early 1980s.
When the first generation of expert system technology turned out to be too fragile for widespread use, a so-called AI Winter set in—government funding of AI and public perception of its promise both withered in the late 1980s. At the same time, a revival of interest in neural network approaches led to the same kind of optimism as had characterized “traditional” AI in the early 1980s. Since that time, substantial progress has been made in a number of areas within AI, leading to renewed commercial interest in fields such as data mining (applied machine learning) and a new wave of expert system technology based on probabilistic inference. The 1990s may in fact come to be seen as the decade of probability. Besides expert systems, the so-called

Filed under: Notes — admin @ 16:44

1) Video lectures – 15 hours of video in c. 10 minute blocks on: flat part recognition, deformable part recognition, range data and stereo data 3D part recognition, detecting & tracking objects in video, and behaviour recognition. There are also about 8 hours of introductory image processing videos.
2) CVonline – organising about 2000 related topics in imaging & vision, including some elementary neurophysiology and psychophysics. Most content is in Wikipedia now, but the index is independent.
3) CVonline supplements:
      list of online and hardcopy books
      list of datasets for research and student projects
      list of useful software packages
      list of over 300 different image analysis application areas
4) Online education resources of the Int. Assoc. for Pattern Recognition
5) HIPR2 – Image Processing Teaching Materials with JAVA
6) CVDICT: Dictionary of Computer Vision and Image Processing

See more details of these below.

Best wishes, Bob Fisher

================================================================

1) Video lectures – 15 hours of video in c. 10 minute blocks.
    See: http://homepages.inf.ed.ac.uk/rbf/AVINVERTED/main_av.htm

    Including PDF slides, links to supplementary reading, and a drill question for each video.
    The site contains a set of video lectures on a subset of computer vision. It is intended for viewers who have an understanding of the nature of images and some understanding of how they can be processed. The course is more like Computer Vision 102, introducing a range of standard and accepted methods, rather than the latest research advances.

    Similarly, there are about 8 hours of introductory image processing lectures at:
      http://homepages.inf.ed.ac.uk/rbf/IVRINVERTED/main_ivr.html
    with similar resources.

================================================================

2) CVonline is a free WWW-based set of introductions to topics in computer vision.

     http://homepages.inf.ed.ac.uk/rbf/CVonline/

    Because of the improvements in the content available in Wikipedia, it is now possible to find content for more than 50% of CVonline’s 2000 topics. CVonline groups the topics into a sensible topic hierarchy, but tries to exploit the advancing quality and breadth of Wikipedia’s content.

================================================================

3) CVonline has a variety of supplemental information useful to
students and researchers,
    namely lists of:

    online and hardcopy books:
http://homepages.inf.ed.ac.uk/rbf/CVonline/books.htm
    datasets for research and student projects:
http://homepages.inf.ed.ac.uk/rbf/CVonline/Imagedbase.htm
    useful software packages:
http://homepages.inf.ed.ac.uk/rbf/CVonline/SWEnvironments.htm
    list of over 300 different image analysis application areas:
http://homepages.inf.ed.ac.uk/rbf/CVonline/applic.htm

================================================================

4) The education resources of the Int. Assoc. for Pattern Recognition

    http://homepages.inf.ed.ac.uk/rbf/IAPR/

contain many links to Tutorials and Surveys, Explanations, Online Demos, Datasets, Books, and Code for:
   Symbolic pattern recognition, Statistical pattern recognition, Machine learning,
   1D Signal pattern recognition, and 2D Image analysis and computer vision.

================================================================

5) HIPR2: free WWW-based Image Processing Teaching Materials with JAVA

   http://homepages.inf.ed.ac.uk/rbf/HIPR2/

   HIPR2 is a free www-based set of tutorial materials for the 50 most commonly
   used image processing operators. It contains tutorial text, sample results
   and JAVA demonstrations of individual operators and collections.

================================================================

6) CVDICT: Dictionary of Computer Vision and Image Processing

   http://homepages.inf.ed.ac.uk/rbf/CVDICT/

   These are the free-view terms A..G from the first version of the
   Dictionary, published by John Wiley and Sons. (Note: there is a second
   edition currently on sale.)

14-10 _PROCESSING THE S

Filed under: NOTES ON INSTALLATIONS, NOTES ON INTERACTIVE ART — admin @ 16:41
https://www.youtube.com/watch?v=uN2DkxPz4kY&list=PL8A560DB61FF9F9E1

Video: The Reflexive Medium

Filed under: NOTES ON CODE — admin @ 16:35

Overview

Video is an electronic medium, dependent on the transfer of electronic signals. Video signals are in constant movement, circulating between camera and monitor. This process of simultaneous production and reproduction makes video the most reflexive of media, distinct from both photography and film (in which the image or a sequence of images is central). Because it is processual and not bound to recording and the appearance of a “frame,” video shares properties with the computer. In this book, Yvonne Spielmann argues that video is not merely an intermediate stage between analog and digital but a medium in its own right. Video has metamorphosed from technology to medium, with a set of aesthetic languages that are specific to it, and current critical debates on new media still need to recognize this.
Spielmann considers video as “transformation imagery,” acknowledging the centrality in video of the transitions between images—and the fact that these transitions are explicitly reflected in new processes. After situating video in a genealogical model that demonstrates both its continuities and discontinuities with other media, Spielmann considers three strands of video praxis: documentary, experimental art, and experimental image-making (which is concerned primarily with signal processing). She then discusses selected works by such artists as Vito Acconci, Ulrike Rosenbach, Joan Jonas, Nam June Paik, Peter Campus, Dara Birnbaum, Nan Hoover, Lynn Hershman, Gary Hill, Steina and Woody Vasulka, Bill Seaman, and others. These works serve to demonstrate the spectrum of possibilities in video as a medium and point to connections with other forms of media. Finally, Spielmann discusses the potential of interactivity, complexity, and hybridization in the future of video as a medium.

About the Author

Yvonne Spielmann is Dean of the Faculty of Fine Arts at Lasalle College of the Arts in Singapore. She is the author of Video: The Reflexive Medium (MIT Press, 2007), which won the Lewis Mumford award in 2009.

Endorsements

“Available for the first time in translation, Yvonne Spielmann’s Video: The Reflexive Medium provides us with a keen parsing of the specificities of video as a medium. Tracing its emergent genealogy as a distinctly audiovisual medium, Spielmann provides a comprehensive catalog of video’s aesthetic evolution from its early intermedial accords with television and performance to its more recent interactions with computers and networked digital media. As the media-specific distinctions between cinematic, televisual, and computer-based media have been eroded beyond recognition, Video: The Reflexive Medium provides a much-needed account of video’s medial specificities and intermedial dependencies.”
Anne Friedberg, Professor and Chair of Critical Studies, School of Cinematic Arts, USC, and author of The Virtual Window: From Alberti to Microsoft
“Spielmann’s Video: The Reflexive Medium is a highly significant, well-researched, and discursive addition to the canon. It is illuminating on both the technological and aesthetical issues, as well as giving primary insights into the artist makers themselves.”
Stephen Partridge, Dean of Research, Duncan of Jordanstone College of Art & Design, University of Dundee

Awards

Winning entry, Professional Cover/Jacket Category, in the 2008 New England Book Show sponsored by Bookbuilders of Boston.
Winner, 2009 The Lewis Mumford Award for Outstanding Scholarship in the Ecology of Technics, given by the Media Ecology Association (MEA).

Photography and video exhibition for the «Energy Days»

Filed under: Energy Days — admin @ 07:57
Municipality of Florina – Programme of events for «Energy Day»
The events that the Municipality of Florina will organize for «Energy Day», within the framework of the «ENERGYNET» project, will take place in Florina on Wednesday 17-6-2015 in the Multi-Purpose Hall of the Municipality. The programme of events will include:
  • An evening session with three presentations:
1) «ENERGYNET: a thematic networking initiative of cross-border local authorities»
2) «Renewable Energy Sources – The Energy of the Earth»
3) «Model sustainable-energy projects of the municipalities of the ENERGYNET network»
The presentations will be accompanied by posters on sustainable energy and ENERGYNET.

  • An art exhibition of works by pupils of the 2nd Gymnasium of Florina, titled «Let’s Act for Energy». The best work will receive an award, and distinctions will be given to the three most noteworthy works.
  • A video and photography exhibition from the Multimedia and Photography Workshops of the Department of Fine and Applied Arts in Florina, University of Western Macedonia.
 

Experiential Digital Art Workshop at Psarades, Prespes: «Symbiosis-Boundaries» 2014

Filed under: BOUNDARIES-SYMBIOSIS, Symbiosis — admin @ 07:50

see http://prespas.blogspot.gr
see https://eetf.uowm.gr/oria-simviosi-ergastirio-stis-prespes/

22 June 2020

SEARCH ARTISTS

Filed under: ARTISTS — admin @ 23:26

 1. David Rokeby, N’Cha(n)t, 2001
 2. Grahame Weinbren, Frames, 1999
 3. Char Davies, Ephémère, 1998
 4. Bill Viola, Going Forth By Day, 2002
 5. Ben Rubin and Mark Hansen, Listening Post, 2003
 6. Jenny Holzer, ARNO, 1996
 7. Irit Batsry, These Are Not My Images, Neither There Nor Here, 2000
 8. Amar Kanwar, A Season Outside, 1997
 9. Lynn Hershman, Conceiving Ada, 1996
10. Laurie Anderson performing Stories from the Nerve Bible, 1995
11. Victoria Vesna with Jim Gimzewski, Zero@wavefunction, Nano Dreams and Nightmares, 2002
12. Luc Courchesne, Landscape One, 1997

Figures

Mark Kostabi, Electric Family, 1998, Frontispiece
I.1. Georges Méliès, Le Voyage dans la Lune
I.2. National Aeronautics and Space Administration (NASA), Astronaut David Scott Plants American Flag on the Moon, July 26, 1971
I.3. Fritz Lang, Metropolis, 1926
I.4. Andy Warhol, Thirty Are Better than One, 1963
I.5. Paul Hosefros, Gauguin and His Flatterers, June 25, 1988

1
1.1. Abraham Bosse (1602-76), Perspective Drawing

1.2. Albrecht Dürer, Untitled, 1538
1.3. Early camera obscura, from A. Kircher, Ars Magna Lucis et Umbrae, 1645
1.4. Camera obscura, circa seventeenth century
1.5. Camera lucida, circa eighteenth century
1.6. Jan Vermeer, Young Girl with a Flute, 1665
1.7. Theodore Maurisset, La Daguerreotypomanie, 1839
1.8. Eadweard Muybridge, Woman Kicking, 1887
1.9. Raoul Hausmann, Tatlin at Home, 1920
1.10. John Heartfield, Hurrah, die Butter ist Alle! (Hurrah, the Butter Is Gone!), 1935
1.11. Lumière Brothers, frames from Un Train Arrive en Gare, 1896

2
2.1. Étienne-Jules Marey, Chronophotographe Géométrique, 1884
2.2. Giacomo Balla, Swifts: Paths of Movement + Dynamic Sequences, 1913
2.3. Marcel Duchamp, Nude Descending a Staircase, No. 2, 1912
2.4. Vladimir Tatlin, Monument to the Third International, 1920
2.5. László Moholy-Nagy, Light Space Modulator, 1923-30
2.6. Dziga Vertov, Man with a Movie Camera, 1929
2.7. Marcel Duchamp, The Large Glass or The Bride Stripped Bare by Her Bachelors, Even, 1915-23 (replica: 1961)
2.8. Charlie Chaplin, Modern Times, 1936
2.9. James Rosenquist Working in Times Square, 1958
2.10. James Rosenquist, I Love You with My Ford, 1961
2.11. Andy Warhol, Green Coca-Cola Bottles, 1962
2.12. Andy Warhol, Electric Chair, 1965
2.13. Roy Lichtenstein, Hopeless, 1963
2.14. Eduardo Paolozzi, Artificial Sun, 1965
2.15. Richard Hamilton, Kent State, 1970

3
3.1. Robert Rauschenberg, Signs, 1970
3.2. Keith Haring, Untitled, 1983
3.3. John Baldessari, I Will Not Make Any More Boring Art, 1971
3.4. Mark Tansey, Secret of the Sphinx, 1984
3.5. Jean Dupuy, Jean Tinguely, and Alexander Calder with Heart Beats Dust, 1969
3.6. Jean Dupuy, artist, and Ralph Martel, engineer, Heart Beats Dust, 1968
3.7. Pepsi-Cola Pavilion, Osaka, 1970
3.8. Remy Charlip, Homage to Loie Fuller, March 8, 1970
3.9. Merce Cunningham, John Cage, and Stan VanDerBeek, Variations V, 1965
3.10. Barbara Kruger, Untitled, 1982
3.11. Carolee Schneemann, Cycladic Imprints, 1993
3.12. Adrian Piper, What It’s Like, What It Is, #3, 1991
3.13. Damien Hirst, Hymn, 2001
3.14. Stelarc, Amplified Body, Automated Arm and Third Hand, 1992
3.15. Larry List, An Excerpt from the History of the World, 1990
3.16. John Craig Freeman, Rocky Flats Billboards, 1994
3.17. Mierle Laderman Ukeles, Touch Sanitation Show, 1984
3.18. Krzysztof Wodiczko, Projection on the Hirshhorn Museum, 1988
3.19. Christo and Jeanne-Claude, Wrapped Reichstag, Berlin, 1995
3.20. Robert Wilson, Einstein on the Beach (final scene by the lightboard), 1986

4
4.1. Nam June Paik, Magnet TV, 1965
4.2. Nam June Paik, TV Buddha, 1974
4.3. Ulrike Rosenbach, Meine Macht ist meine Ohnmacht (To Have No Power Is to Have Power), 1978
4.4. Bruce Nauman, Live-Taped Video Corridor, 1969-70
4.5. Dan Graham, Opposing Mirrors and Video Monitors on Time Delay, 1974
4.6. Beryl Korot, Dachau, four-channel video installation, 1974
4.7. Vito Acconci, performance at Reese Palley Gallery, 1971
4.8. Vito Acconci, Dennis Oppenheim, and Terry Fox, performance, 1971
4.9. Frank Gillette and Ira Schneider, Wipe Cycle, 1969
4.10. Juan Downey, Information Withheld, 1983
4.11. Doug Hall, The Terrible Uncertainty of the Thing Described, 1987
4.12. Martha Rosler, Vital Statistics of a Citizen, Simply Obtained, 1977
4.13. Dara Birnbaum, Technology/Transformation: Wonder Woman, 1978/9
4.14. Joan Jonas, Double Lunar Dogs, 1984
4.15. Chris Burden, Leonardo, Michelangelo, Rembrandt, Chris Burden, 1976
4.16. Daniel Reeves, Smothering Dreams, 1981
4.17. Ant Farm, Media Burn, 1974/5
4.18. Paper Tiger TV, Herb Schiller Smashes the Myths of the Information Industry, 1985
4.19. Paper Tiger TV, Taping the People With AIDS Coalition Talk Back Show, 1988
4.20. The Wooster Group, To You, the Birdie!, 2002
4.21. Judith Barry, Maelstrom (Part One), 1988
4.22. Jean-Luc Godard and Anne-Marie Miéville, Six Fois Deux (Sur et Sous la Communication), 1976
4.23. Frame from Six Fois Deux, 1976
4.24. Frame from Godard, France/tour/détour/deux/enfants, 1978
4.25. Laurie Anderson, O Superman, 1981
4.26. Robert Ashley, Camilla, 1970
4.27. Miroslaw Rogala, Nature Is Leaving Us, 1989
4.28. Dara Birnbaum, Damnation of Faust: Evocation, 1984
4.29. Nam June Paik, TV Garden, 1974-78
4.30. Bill Viola, Room for St. John of the Cross, 1983
4.31. Steina, Borealis, 1993
4.32. Dieter Froese, Not a Model for Big Brother’s Spy Cycle (Unpräzise Angaben), 1984
4.33. Julia Scher, detail from I’ll Be Gentle, 1991
4.34. Mary Lucier, Oblique House, 1993
4.35. Tony Oursler, Horror (from Judy), 1994
4.36. Bill Viola, Slowly Turning Narrative, 1992
4.37. Joan Jonas, Lines in the Sand, 2002
4.38. Eija-Liisa Ahtila, The House, 2002
4.39. Doug Aitken, New Skin, 2002
4.40. Gary Hill, Still Life, 1999
4.41. Shirin Neshat, Untitled (Rapture series – Women Scattered), 1999
4.42. Chantal Akerman, From the Other Side, 2002
4.43. Josely Carvalho, Book of Roofs, 2001

5

5.1. Keith Haring, Untitled, 1984
5.2. Joseph Nechvatal, The Informed Man, 1986
5.3. Nancy Burson with David Kramlich and Richard Carling, Androgyny (Six Men and Six Women), 1982
5.4. Woody Vasulka, Number 6, ca. 1982
5.5. Janet Zweig, Mind Over Matter, 1993
5.6. Peter Weibel, The Wall, The Curtain (Boundary, which), also Lascaux, 1994
5.7. Craig Hickman, Signal to Noise #1, 1988
5.8. Ken Feingold, If/Then, 2001
5.9. www.thesims.com, 2000 ongoing
5.10. Tennessee Rice Dixon, Count, 1998
5.11. Christa Sommerer and Laurent Mignonneau, Interactive Plant Growing, 1994
5.12. Christa Sommerer and Laurent Mignonneau, Interactive Plant Growing, 1994
5.13. James Seawright, Houseplants, 1983
5.14. Manfred Mohr, P-159/A, 1973
5.15. Jill Scott, Beyond Hierarchy, 2000
5.16. Paul Kaiser and Shelley Eshkar, Pedestrian, 2002
5.17. Harold Cohen, Brooklyn Museum Installation, 1983
5.18. Gretchen Bender, Total Recall, 1987
5.19. Gretchen Bender, diagram of monitor arrangements for Total Recall
5.20. Jenny Holzer, Protect Me from What I Want, 1986
5.21. Jenny Holzer, Laments, 1989
5.22. Jenny Holzer, Survival Series, installation at the Guggenheim Museum, New York, 1990
5.23. Jenny Holzer, installation view, US Pavilion 44, Venice, 1990
5.24. David Small, Illuminated Manuscript, 2002
5.25. Bill Seaman, Passage Sets/One Pulls Pivots at the Tip of the Tongue, 1994
5.26. Lynn Hershman, A Room of One’s Own, 1993
5.27. Lynn Hershman, detail, A Room of One’s Own, 1993
5.28. Lynn Hershman, detail, A Room of One’s Own, 1993
5.29. Grahame Weinbren, Sonata, 1993
5.30. Grahame Weinbren, detail, Sonata, 1993
5.31. George Legrady, Pockets Full of Memories, 2001
5.32. Naoko Tosa, detail, Talking to Neuro Baby, 1994
5.33. Naoko Tosa, Talking to Neuro Baby, 1994
5.34. Naoko Tosa, Talking to Neuro Baby, 1994
5.35. Zoe Beloff, The Influencing Machine of Miss Natalija A., 2002
5.36. Pattie Maes, Alive: An Artificial Life, 1994
5.37. Miroslaw Rogala, Lovers Leap, 1995
5.38. Miroslaw Rogala, Lovers Leap, 1995
5.39. Liz Phillips, Echo Evolution, 1999
5.40. Stephen Vitiello, Frogs in Feedback, 2000
5.41. Tim Hawkinson, Überorgan, 2000
5.42. Maryanne Amacher, Music for Sound-Joined Rooms, 1998
5.43. Greg Lock, Commute (on Location), 2002
5.44. Virtual reality gloves, circa 1994
5.45. Virtual reality headset, circa 1994
5.46. Jeffrey Shaw, ConFiguring the CAVE, 1996
5.47. Perry Hoberman, Bar Code Hotel, 1994
5.48. Perry Hoberman, Bar Code Hotel, 1994
5.49. Toni Dove, The Coroner’s Dream from Archaeology of a Mother Tongue, 1993
5.50. Perry Hoberman, Tenets, 1999
5.51. John Cage, Comperes and 11, 1988
5.52. Robert Wilson and Philip Glass, Monsters of Grace, 1998
5.53. Robert Wilson and Philip Glass, Monsters of Grace, 1998
5.54. Robert Wilson and Philip Glass, Monsters of Grace, 1998

6

6.1. Rafael Lozano-Hemmer, Vectorial Elevation, 1999/2002
6.2. Eidekenljghts, 2002 
6.3. Scott Paterson and Marina Zurkow, PDPal, 2002
6.4. W. Bradford Paley, TextArc, 2002/3
6.5. Roy Ascott. View Nave Laboratory Minna. 1.36 
6.6. Roy Ascott, Organe et Fonction d’Alice au Pays des Merveilles, 1995
6.7. Sherrie Rabinowitz and Kit Galloway, Satellite Arts Project: A Space With No Geographical Boundaries, 1977
6.8. Sherrie Rabinowitz and Kit Galloway, Electronic Cafe Network, 1984-2003
6.9. Eduardo Kac and Ikuo Nakamura, Essay Concerning Human Understanding, 1994
6.10. Eduardo Kac and Ikuo Nakamura, Essay Concerning Human Understanding, 1994
6.11. Paul Sermon, Telematic Dreaming, 1992
6.12. Paul Sermon, Telematic Vision, 1992
6.13. Lorie Novak, Collected Visions, website, 1996
6.14. Yael Kanarek, World of Awe, 1995 ongoing
6.15. Nomeda and Gediminas Urbonas, Transaction, 2000 ongoing
6.16. Wayne Dunkley, The Degradation and Removal of the/a Black Male, 2001
6.17. Emily Hartzell and Nina Sobell, c Sat Hem, 1995
6.18. David Blair, Wax, or the Discovery of Television Among the Bees, 1988
6.19. Muntadas, The File Room, 1994
6.20. Giselle Beiguelman, Egoscópio, 2002
6.21. Perry Bard, Walk This Way, 2001
6.22. Mark Napier, Riot, 1999
6.23. Josh On, They Rule, 2001  
6.24. Alex Galloway and RSG, Carnivore, 2001-3
6.25. John Klima, Ecosystem, 2001
6.26. Nancy Paterson, Stock Market Skirt, 1998
6.27. Mary Flanagan, collection, 2001
6.28. Robert Nideffer, PROXY, 2002
6.29. tsunamii.net (Charles Lim Yi Yong and Tien Woon), Alpha 3.3, 2002
6.30. Marek Walczak, Helen Thorington, and Jesse Gilbert, Adrift, 2001
6.31. Andreja Kuluncic, Distributive Justice, Web work, 2002
6.32. Critical Art Ensemble, Welcome to a World Without Borders, 1994

Digital Currents: Art in the Electronic Age
