Selected Courses on Digital Art-UOWM

23 June 2020

Digital Aesthetics: Introduction

Claudia Giannetti
The early twentieth century saw the formation in various fields of new theoretical approaches sharing a skeptical attitude towards the fundamental certainties that had profoundly influenced occidental culture and science. Towards the mid-twentieth century concepts like truth, reality, reason and knowledge became central in an intensive contest between rationalism and relativism. In the course of this debate, several theories were dissociated from the self-referential character of their scientific disciplines and increasingly placed in correlation with other fields. Examples of metadisciplinary models include the cybernetic analysis of message transmission and man-machine communication or, more recently, postmodernist philosophy and its notion of ‹contaminated,› ‹weak› thinking. [1] This relativism manifested itself in various aspects of art: as an essential component in the process of producing experimental art from the first avant-garde movements onward; in the radical transformation of the forms of art reception; in the tendency to interconnect and establish interchange among various art genres (discernible in interventionist and interdisciplinary works or ‹mixed media›); and finally in the intensified exchange among art, science, and technology. Artistic practice appropriated new media—initially photography and film, later video and computer—and new communication systems—post and telephone, followed by television and Internet. Under this premise, and above all from the 1960s onward, a gradual shift set in away from academic, orthodox positions attempting to confine art to traditional techniques, and aesthetics to ontological foundations.
However, the profound transformations resulting from these new approaches did not invariably meet with understanding, let alone acceptance, from artists. If one further takes into consideration the recently re-ignited controversy about the long-predicted crises of art and philosophical aesthetics, as well as the widespread discourse among postmodernist writers which was linked to tendencies in technological and academic theory, then everything does in fact seem to point toward a disintegration of art and aesthetics. Yet a large part of such polemics can be attributed to the fact that aesthetic theory and artistic practice have gone separate ways. Artists’ increasing use of technology is bringing to light a far-reaching and on-going discrepancy between artistic perception, art theory, and aesthetics, which are seen to be notably diverging instead of developing synchronously and congruently. This gulf between theoretical «corpus» and artistic practice culminates in a paradox that without doubt leads to the often proclaimed end of art.
Nevertheless the conviction remains that certain symptoms of transition cannot be immediately equated with the radical disintegration of the fields involved. It is rather the case that new intellectual approaches and modes of experiencing must be found in order to enable the analysis and assimilation—as opposed to rejection—of the contemporary phenomena. One access route to these new forms is shown by the theory and practice of media art, and of interactive media art especially, whose renewing concepts are discernible in the fact that aesthetic theory is no longer focussed exclusively on the art object itself, but on its process, on system and contexts, on the broad linkage of different disciplines, and on reformulating the roles of the maker and the viewer of a work of art.
The complex process of transformation undergone by art and aesthetics, as well as the closely intermeshed interdisciplinary relationships, can be understood only by investigating those phenomena and theories which have so far driven forward the syntopy [2] of art, science, and technology, and in the future will continue to do so. It is not sufficient to describe the current state of art by concentrating on its epicenter; instead one must expand the horizon of consideration to adjacent fields and trace the historical developments in which corresponding changes and contemporary phenomena can be discerned. One aim of this hypertext monograph is to work out an aesthetic concept inherently formed by the context and creative experience of interactivity-based works, as well as their presentation and reception. The intention is to show potential paths towards a renewal of aesthetic discourses: paths already smoothed by those pioneers and artists whose tracks this essay follows. In this way various concepts of science, technology, and art are linked with a view to revising the notions of art, aesthetics, and spectator.
Without a doubt the artistic use of new technologies and the specific current forms of interlocking science and art lead to diverse formulations of questions—of practical and formal, as well as conceptual and philosophical nature—to which only future developments will deliver an answer. The «Aesthetics of the Digital» addresses several of these principal questions. Some contain possible answers, others lead to new questions that open up space for further considerations.
Translation: Tom Morrison
ART, SCIENCE, AND TECHNOLOGY
Claudia Giannetti
 
Art – science – art
Deliberations on the connection between art and science have various points of departure. The most general considerations are limited to the assumption of a parallel development. In his writings published in 1970, Werner Heisenberg, who along with Max Planck counts as a founding father of quantum theory, stated that the tendencies towards abstraction in the sciences were comparable with those in the field of art. According to Heisenberg, new artistic and scientific forms can result only from new content, but the converse does not apply. To renew art or to revolutionize science, he wrote, meant to create new content and concepts, and not just new forms. [1]
A question more complex than that of parallels between art and science is the extent to which art influences the sciences. According to Peter Weibel, this question can be answered only methodologically, that is by applying a comparison which views art and science as methods. While science, says Weibel, is distinctly methodological in character, art is generally not regarded as a method: «This is our first claim: art and science can only reasonably be compared if we accept that both are methods. This does not mean that we declare that both have the same methods. We only want to declare that both have a methodological approach, even if their methods are or can be different.» [2]
Accordingly it would be permissible to view art and science as convergent in the methodological sense. As Weibel sees it, science is influenced by art in regard to its methods, but not by its products and references: «Because any time science develops the tendency for its methods to become too authoritarian, too dogmatic, science turns to art and to the methodology of art which is plurality of methods.» [3] Objective nature exists neither in the framework of the sciences nor in culture independently of social construction; «art and science meet and converge in the method of social construction.» [4]
This position finds its most radical expression in the science-theoretical contributions of Paul Feyerabend. As a critic of scientific rationalism, he develops new interpretations and connections among the arts and sciences. He is of the opinion that artists and scientists developing a style or theory frequently pursue a secondary intention, namely that of representing ‹the› truth or ‹the› reality. However, artistic styles are closely connected with styles of thought.
What a specific form of thought understands by concepts such as truth or reality is what that form of thought asserts as truth. When one decides in favor of a style, a reality, or a form of truth, one always chooses a human-made construction. In other words, Feyerabend negates the possibility of absolute rationality and logic in regard to that which is created by the human mind. He asserts that this relativist, and in a certain sense irrational, factor inherent in every branch of science places science in the proximity of art. According to Feyerabend, the sciences are not an institution of objective truth, but are arts along the lines of a progressive understanding of art. [5]
Feyerabend’s line of argument reflects the skepticism that deeply influenced occidental culture and science well into the twentieth century. The aforementioned questions of truth, reality and reason are central components of the contest between rationalism and relativism affecting art no less directly than science. If the nature of science were to be considered a research method under the premises of reality, plausibility, and dialectics, then whoever attempted to identify these three principles by strictly observing the complexity of the objects would, according to the Spanish scientist Jorge Wagensberg, reach the conclusion that the object resisted the method. The only manner of proceeding would be to «soften up» the method, with the result that «science is transformed into ideology.» «At its core ideology means not research, but faith. It follows from this consideration that one must stop with ideology all the holes which science has itself failed to stop. […] If the knowledge towards which we aspire is ruled not by laws but by world-views, then it would seem expedient to take our leave of scientific methods, and perhaps even adopt principles radically opposed to the latter. Precisely that is the case in art, in a kind of knowledge whose creators have not the least interest in distancing themselves from their creation.» [6]
Of particular relevance to the understanding of a new interpenetration of art and science is the generative nature of either area, which brings forth words or world-views of its own. For that reason, «the worlds of art and science are ideologically no longer opposites,» as Ilya Prigogine states, «the variety of the significates and the basic opacity of the world are reflected in new languages and new formalisms.» [7]
The origins of information theory
The technological revolution received its fundamental impetus from the first industrial revolution in the nineteenth century. By starting up a process of mechanization, the industrial revolution triggered the phenomenon of crises of control. [8] The mounting production levels resulting from mechanization led to the need for control systems to accelerate the flow of information. Researchers began to seek solutions in feedback techniques, automatic control systems, and information processing.
Under the title «On Governors» in 1868, James Clerk Maxwell presented the first theoretical study towards an analysis of control and feedback mechanisms, so ushering in the radical transformation in automatic control engineering. By the late nineteenth century, a series of developments and technical innovations were underway that in the 1940s would serve as the basis of a new theory, namely cybernetics. [9]
The control revolution produced not only feedback techniques and a new hierarchization of media, but also revolutionized the cultural reproduction forms of society. [10] This included areas like communications and art, since the technologies exercised a direct influence on the forms of sociocultural (re)production.
Until then, nevertheless, the themes associated with control mechanisms and automation were discussed in connection with only one common parameter, namely energy. As the basic concept of Newtonian mechanics, energy retained the same position in the natural sciences and in research fields like acoustics, electrical science, and optics. The invariant of ‹mass› similarly occupied a central position in physics. However, as production techniques continued to be improved, so the relationship of human and machine began to change likewise, leading to the emergence of questions about new terms and theories able to make this communication process between biological and technological systems the object of targeted research.
The constitution of two new disciplines: cybernetics and artificial intelligence
That «society can only be understood through a study of the messages and the communication facilities which belong to it; and that in the future development of these messages and communication facilities, messages between man and machines, between machines and man, and between machine and machine, are destined to play an ever increasing part,» [11] was the key idea of the American mathematician Norbert Wiener (1894–1964), which he elaborated in his book «The Human Use of Human Beings: Cybernetics and Society,» published in 1950 after a first technical study «Cybernetics, or Control and Communication in the Animal and the Machine» of 1948. In 1950 likewise, the British mathematician Alan Turing (1912–1954) raised the question of the feasibility of logical thought by machines. In his essay «Computing Machinery and Intelligence,» published in volume 59 of the philosophical journal «Mind,» Turing proceeds from the basic question with which his text begins: «Can machines think?»
Until the mid-twentieth century no more than a few researchers working in isolation were concerned with subjects such as communication between dissimilar systems (for instance, biological and technical systems), or with the feasibility of technically designing thought machines. In addition to Wiener and Turing, their ranks included Charles Babbage, Claude Shannon, Warren Weaver and Hermann Schmidt. However, from the 1950s on these subjects rapidly became two fields of basic research: cybernetics and Artificial Intelligence. [12] The two aforementioned texts triggered a flood of publications containing speculation and analysis on these subjects. In the first three years after 1950 alone, more than a thousand published essays dealt with intelligence and with communication with and between machines. Yet when Turing published his essay there existed no more than four digital computers worldwide (Mark I and EDSAC in England, ENIAC and BINAC in the USA). [13] Although Turing’s theorem—everything the human mind can do in the form of an algorithm can also be carried out by a Universal Turing Machine—was based on models so far investigated only as a hypothetical experiment, several researchers were inspired to empirically confirm or disprove it by building machines.
Communication
The approach of cybernetics—a name derived from the Greek term ‹kybernetes› (steersman)—consists in transferring the theory of control and message transmission, whether in the machine or in a living being, to the fields of communication and machine control. The objective is to investigate the relationships between animal and machine, and in the case of the machine the specific mode of its behavior, as a characteristic of the performance to be expected. [14]
On the basis of the observation of certain analogies between machines and living organisms, Wiener asserts that no reason actually exists not to make a machine resemble a human being, since both develop tendencies toward decreasing entropy, meaning that both are examples of local anti-entropic phenomena.
Turing likewise conceded priority to the subject of communication. His famous experiment—the imitation game, as he called it, now better known as the Turing Test—for verifying the intelligence of a computer was concerned less with the actual construction of such a machine than with simulating with machines the human capability of communication. Turing was here acting in line with a tradition of measuring the faculty of thought by the ability to use human language. Descartes had already presented the logically semantic usage of language as a criterion for identifying thinking beings. For a long time, the mastery of semantics would remain a basic problem of Artificial Intelligence.
Information
In contrast to that tradition Wiener’s cybernetics sought operational ways of developing a specific language that would enable communication between dissimilar systems, and aimed to adapt semantics to specific goals in the process. Viewed from this perspective, Wiener’s theory replaced the notion of energy with that of information as the elementary parameter of communication, and thus postulated the definition of this new invariant for cybernetic science as a whole, which is a basic prerequisite for understanding the range of the cybernetic approach.
Unlike Newton’s mechanics, which operates with closed systems, information is applied to open systems. In this way it must be seen as a key enabling linkage and communication between dissimilar systems, and between the latter and the external world. ‹Mass› and ‹energy› are directly related to matter in the natural sciences, whereas ‹information› is not conveyed by any substance, but is based on variable properties: Information can be reproduced (duplicated or copied), destroyed (erased), or reiterated (repeated). «Information is a name for the content of what is exchanged with the outer world as we adjust to it, and make our adjustment felt upon it. The process of receiving and of using information is the process of our adjusting to the contingencies of the outer environment, and of our living effectively within that environment.» [15] To this extent, it is not the sheer quantity of circulating information that is crucial to the effectiveness of communication, but the degree to which this information is integrated into communication. Along the lines of cybernetics, then, significant information is not the entirety of all information transmitted, but that information which passes through the ‹filters.›
Feedback
In the field of information and communication Wiener devoted particular attention to the question of automatons and the development of feedback models. His core interest lay in investigating machines capable of evaluating input and of integrating the stored experience into further feedback loops. In this respect, feedback is a method of making systems self-regulating: the results of preceding activities are re-integrated into the procedural sequence, enabling corrections to be made continuously at runtime. To this end, machines must be capable of learning processes.
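The mechanism can be illustrated with a minimal sketch (a hypothetical toy example in Python, not drawn from Wiener's own work): the deviation of the current output from a target is measured and fed back into the next control action, so the system corrects itself step by step.

# Minimal illustration of a negative-feedback loop: the output of each
# step is measured, compared with a target, and the error is fed back
# into the next control action (a simple proportional controller).
def feedback_loop(target, initial, gain=0.5, steps=10):
    value = initial
    history = [value]
    for _ in range(steps):
        error = target - value      # measure the deviation from the goal
        correction = gain * error   # compute a corrective action
        value = value + correction  # re-integrate it into the system
        history.append(value)
    return history

# The value converges toward the target as corrections accumulate.
print(feedback_loop(target=20.0, initial=5.0))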
Although his approaches and conclusions are very different from those of Wiener, Turing in his essay likewise clearly indicated the necessity of developing systems capable of learning. Devoted to the subject of learning machines, the essay takes as its starting point the principle that education can take place only where there is a communication channel in both directions between teacher and pupil—a principle whose significance reaches beyond the essay itself, toward the development of interactive, digital systems. This communication channel permits bi-directional information exchange, and therefore also learning processes. On the basis of this method, Turing repudiated the thesis set up by Ada Lovelace in 1842. [22] Using investigations made with the ‹Analytical Engine› of Charles Babbage, Lady Lovelace had claimed that a machine can do only that which it is instructed to do, and therefore is never capable of producing anything truly new. [23] Turing contradicted this thesis with the question «who can be certain that ‹original work› that he has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known general principles?» [24] He further pointed out that the machine must be to a certain degree ‹undisciplined› or random-controlled in order that its behavior can be considered intelligent. [25]
Precisely this element of chance was what lent the machine ‹creative› ability, namely the ability to solve problems. Although discrete machines that could pass the Turing Test are feasible, they would succeed not because they were replicas of the human brain but because they would have been programmed accordingly. As Turing himself realized, the basic problem lies in the area of programming. In fact, it was not necessary to wait the fifty years assumed by Turing in order to program «computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than a 70 per cent chance of making the right identification after five minutes of questioning.» [26] The programs have been written, and have passed the Turing Test with a high degree of interactivity. One might therefore conclude that the problem is not solely confined to investigating the possibilities of Artificial Intelligence. [27]
Viewed from the contemporary perspective, cybernetics and AI cannot be reduced to solely scientific, economic, or technical interest. Since these theories belong to a socio-technical field in which communication structures, world-views and conceptions of the human being are formed and transformed, they are concerned with philosophic issues of perception, cognition, language, ethics, and aesthetics. If information technology is basically working towards the automation of mental processes, then it directly or indirectly reaches into disciplines concerned with human cognition or creativity.
Translation by Tom Morrison

23 March 2015

GLOSSARY OF TERMS

GLOSSARY OF TERMS
1-bit color -The lowest number of colors per pixel in which a graphics file can be stored. In 1-bit color, each pixel is either black or white.
8-bit color/grayscale -In 8-bit color, each pixel has eight bits assigned to it, providing 256 colors or shades of gray, as in a grayscale image.
24-bit color -In 24-bit color, each pixel has 24 bits assigned to it, representing 16.7 million colors. 8 bits – or one byte – is assigned to each of the red, green, and blue components of a pixel.
32-bit color – A display color-depth setting that is often referred to as true color and offers a palette of over 4 billion colors, or 2^32.
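As a quick check of the figures in the entries above, the number of distinct values a pixel can represent is 2 raised to the number of bits per pixel. A small illustrative Python sketch:

# Number of representable values per pixel for common bit depths.
for bits in (1, 8, 24, 32):
    print(f"{bits:>2}-bit color: {2 ** bits:,} values")
# 1-bit: 2 (black or white)
# 8-bit: 256
# 24-bit: 16,777,216 (about 16.7 million)
# 32-bit: 4,294,967,296 (over 4 billion)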
Additive Colors – Red, Green, and Blue are referred to as additive colors. Red+Green+Blue=White.
Algorithm -The specific process in a computer program used to solve a particular problem.
Aliasing -An effect caused by sampling an image (or signal) at too low a rate. It makes rapidly changing (high-detail) areas of an image appear as slowly changing areas in the sampled image. Once aliasing occurs, there is no way to accurately reproduce the original image from the sampled image.
Analog -Analog data are represented electronically by a continuous waveform signal. Examples of analog media are traditional photographic images and phonograph records.
Anti-Aliasing – The process of reducing stair-stepping by smoothing edges where individual pixels are visible.
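One common way to reduce stair-stepping is supersampling: sample at a higher resolution and average neighbouring samples down to the target size. The sketch below (purely illustrative Python, not any particular product's algorithm) contrasts nearest-neighbour downsampling, which keeps jagged edges, with 2x2 block averaging, which smooths them.

# Downsample a grayscale image (a list of rows of 0-255 values) by 2x.
def downsample_nearest(img):
    # Keep every second pixel: hard edges stay jagged.
    return [row[::2] for row in img[::2]]

def downsample_average(img):
    # Average each 2x2 block: edges are smoothed (simple anti-aliasing).
    out = []
    for y in range(0, len(img) - 1, 2):
        row = []
        for x in range(0, len(img[0]) - 1, 2):
            block_sum = (img[y][x] + img[y][x + 1] +
                         img[y + 1][x] + img[y + 1][x + 1])
            row.append(block_sum // 4)
        out.append(row)
    return out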
Application -A computer software program designed to meet a specific need.
Binary -A coding or counting system with only two symbols or conditions (off/on, zero/one, mark/space, high/low). The binary system is the basis
for storing data in computers.
Bit – A binary digit, a fundamental digital quantity representing either 1 or 0 (on or off).
Bitmap (BMP) -An image made up of dots, or pixels. Refers to a raster image, in which the image consists of rows of pixels rather than vector coordinates.
Channel – One piece of information stored with an image. True color images, for instance, have three channels: red, green, and blue.
Chroma – The color of an image element (pixel). Chroma is made up of saturation + hue values, but separate from the luminance value.
CMYK (Cyan, Magenta, Yellow, Black) -One of several color encoding systems used by printers for combining primary colors to produce a full-color image. In CMYK, colors are expressed by the “subtractive primaries” (cyan, magenta, yellow) and black. Black is called “K” or keyline since black, keylined text appears on this layer.
Compression – The reduction of data to reduce file size for storage. Compression can be “lossy” (such as JPEG) or “lossless” (such as TIFF LZW). Greater
reduction is possible with lossy compression than with lossless schemes.
Continuous Tone – An image where brightness appears consistent and uninterrupted. Each pixel in a continuous tone image file uses at least one byte each for
its red, green, and blue values. This permits 256 density levels per color or more than 16 million mixture colors.
Digital vs. analog information – Digital data are represented by discrete values. Analog information is represented by ranges of values, and is therefore less
precise. For example, you get clearer sound from an audio CD (which is digital) than from an audiocassette (which is analog). Computers use digital data.
Desktop Publishing – Describes the digital process of combining text with visuals and graphics to create brochures, newsletters, logos, electronic slides and
other published work with a computer.
Digital – A system or device in which information is stored or manipulated by on/off impulses, so that each piece of information has an exact or repeatable
value (code).
Digitization – The process of converting analog information into digital format for use by a computer.
Dithering – A method for simulating many colors or shades of gray with only a few. Pixels of a limited number of colors, placed close together, are perceived by the eye as a new color.
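A small illustration of the idea (an ordered-dither sketch in Python; real imaging software may use other patterns such as error diffusion): 8-bit gray values are reduced to pure black and white, and areas of intermediate gray come out as mixtures of black and white pixels.

# Ordered (Bayer) dithering: reduce 8-bit grayscale to 1-bit black/white.
BAYER_2x2 = [[0, 2],
             [3, 1]]  # threshold pattern, scaled to the 0-255 range below

def dither(img):
    out = []
    for y, row in enumerate(img):
        out_row = []
        for x, value in enumerate(row):
            threshold = (BAYER_2x2[y % 2][x % 2] + 0.5) * 255 / 4
            out_row.append(255 if value > threshold else 0)
        out.append(out_row)
    return out

# A flat mid-gray area (value 128) becomes a black-and-white checkerboard,
# which the eye perceives as gray from a distance.
print(dither([[128] * 4 for _ in range(4)]))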
Download – The transfer of files or other information from one piece of computer equipment to another.
DPI (Dots Per Inch) -The measurement of resolution of a printer or video monitor based on dot density. For example, most laser printers have a resolution of
300 dpi, most monitors 72 dpi, and most PostScript imagesetters 1200 to 2450 dpi. The measurement can also relate to pixels in an input file, or line screen
dots (halftone screen) in a pre-press output film.
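Resolution ties pixel dimensions to physical output size. A short worked example in Python (the figures are illustrative only):

# Relate pixel counts, physical size in inches, and dots/pixels per inch.
def pixels_needed(inches, dpi):
    return round(inches * dpi)

def print_size_inches(pixels, dpi):
    return pixels / dpi

print(pixels_needed(8, 300))         # 2400 pixels span 8 inches at 300 dpi
print(print_size_inches(1800, 300))  # an 1800-pixel side prints 6 inches at 300 dpi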
Driver – A software utility designed to tell a computer how to operate an external device. For instance, to operate a printer or a scanner, a computer will need a specific driver.
FireWire – A very fast external bus that supports data transfer rates of up to 400 Mbps. FireWire was developed by Apple and falls under the IEEE 1394 standard. Other companies implement IEEE 1394 under names such as Lynx and i.LINK.
FTP (File Transfer Protocol) – A standard protocol for transferring files over the Internet.
GIF File Format – Stands for Graphic Interchange Format, a raster oriented graphic file format developed by CompuServe to allow exchange of
image files across multiple platforms.
Gigabyte (GB) -A measure of computer memory or disk space consisting of about one thousand million bytes (a thousand megabytes). The actual value is
1,073,741,824 bytes (1024 megabytes).
Gray Scale – A term used to describe an image containing shades of gray as well as black and white.
Halftone Image – An image reproduced through a special screen made up of dots of various sizes to simulate shades of gray in a photograph. Typically used for
newspaper or magazine reproduction of images.


Hue -A term used to describe the entire range of colors of the spectrum; hue is the component that determines just what color you are using. In
gradients, when you use a color model in which hue is a component, you can create rainbow effects.
Image Resolution – The number of pixels per unit length of image. For example, pixels per inch, pixels per millimeter, or pixels wide.
Import – The process of bringing data into a document from another computer, program, type of file format, or device.
Jaz Drive – A removable disk drive made by Iomega that enables users to save about 1,000 megabytes (1 gigabyte) of information on its special disks.
JPEG (Joint Photographic Experts Group) -A technique for compressing full-color bit-mapped graphics.
Kilobyte – An amount of computer memory, disk space, or document size consisting of approximately one thousand bytes. Actual value is 1024
bytes.
Lossless compression – Reduces the size of files by creating an internal shorthand that rebuilds the data exactly as it was before compression. It is therefore said to be non-destructive to image data.
Lossy compression – A method of reducing image file size by throwing away unneeded data, causing a slight degradation of image quality. JPEG is a
lossy compression method.
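The difference can be made concrete with a tiny lossless scheme. Run-length encoding (used here only as an illustrative stand-in, not the actual method behind JPEG or TIFF LZW) stores repeated values as (value, count) pairs, and decoding rebuilds the original data exactly; a lossy method would instead discard some of the data for a smaller file.

# Run-length encoding: a minimal lossless scheme in Python.
def rle_encode(data):
    encoded, i = [], 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        encoded.append((data[i], run))
        i += run
    return encoded

def rle_decode(pairs):
    out = []
    for value, count in pairs:
        out.extend([value] * count)
    return out

row = [255, 255, 255, 0, 0, 128, 128, 128, 128]
packed = rle_encode(row)
assert rle_decode(packed) == row   # round trip is exact: nothing is lost
print(packed)                      # [(255, 3), (0, 2), (128, 4)]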
Mask – A defined area used to limit the effect of image-editing operations to certain regions of the image. In an electronic imaging system, masks
are drawn manually (with a stylus or mouse) or created automatically–keyed to specific density levels or hue, saturation and luminance values in the
image. It is similar to photographic lith masking in an enlarger.
Megabyte (MB) – An amount of computer memory consisting of about one million bytes. The actual value is 1,048,576 bytes.
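The kilobyte, megabyte, and gigabyte figures quoted in these entries are successive powers of 1024; a quick check in Python:

# 1024-based storage units, matching the byte values given above.
for name, power in (("kilobyte", 1), ("megabyte", 2), ("gigabyte", 3)):
    print(f"1 {name} = {1024 ** power:,} bytes")
# 1 kilobyte = 1,024 bytes
# 1 megabyte = 1,048,576 bytes
# 1 gigabyte = 1,073,741,824 bytes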
Moire – A visible pattern that occurs when one or more halftone screens are misregistered in a color image.
Multimedia – This involves the combination of two or more media into a single presentation. For example, combining video, audio, photos, graphics and/or animations into a presentation.
Network – A group of computers connected to communicate with each other, share resources and peripherals.
Palette – The set of colors available to a computer or device. The palette allows the user to choose which colors are available for the computer to display. The more colors, the larger the data and the more processing time required to display your images. If the system uses 24-bit color, then over 16.7 million colors are included in the palette.
Pixel (PICture ELement) -The smallest element of a digitized image. Also, one of the tiny points of light that make up a picture on a computer screen.
PNG (Portable Network Graphics) – pronounced ping. A new standard that has been approved by the World Wide Web consortium to replace GIF because
GIF uses a patented data compression algorithm. PNG is completely patent and license-free.
PostScript – A page description language developed by Adobe Systems, Inc. to control precisely how and where shapes and type will appear on a page.
Software and hardware may be described as being PostScript compatible.
RAM – Random Access Memory. The most common type of computer memory; where the CPU stores software, programs, and data currently being used.
RAM is usually volatile memory, meaning that when the computer is turned off, crashes, or loses power, the contents of the memory are lost. A large amount
of RAM usually offers faster manipulation or faster background processing.
Raster – Raster images are made up of individual dots, each of which has a defined value that precisely identifies its specific color, size and place within the image. (Also known as bitmapped images.)
Render – The final step in processing an image transformation or three-dimensional scene, in which the resulting image is drawn to the screen.
Resize – To alter the resolution or the horizontal or vertical size of an image.
Resolution – The number of pixels per unit length of image. For example, pixels per inch, pixels per millimeter, or pixels wide.
RGB – Short for Red, Green, and Blue; the primary colors used to simulate natural color on computer monitors and television sets.
Saturation – The degree to which a color is undiluted by white light. If a color is 100 percent saturated, it contains no white light. If a color has no saturation, it is a shade of gray.
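Saturation can be estimated directly from a pixel's red, green, and blue values. The sketch below uses an HSL-style formula (one of several common definitions, chosen here purely for illustration): 0.0 means a pure gray, 1.0 a fully saturated color.

# Estimate the saturation of an RGB pixel (0-255 per channel).
def saturation(r, g, b):
    mx, mn = max(r, g, b) / 255, min(r, g, b) / 255
    if mx == mn:
        return 0.0                 # no channel difference: a shade of gray
    lightness = (mx + mn) / 2
    return (mx - mn) / (1 - abs(2 * lightness - 1))

print(saturation(255, 0, 0))       # 1.0 - pure red, fully saturated
print(saturation(128, 128, 128))   # 0.0 - gray contains no color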
Software – Written coded commands that tell the computer what tasks to perform. For example, Word, Photoshop, Picture Easy, and PhotoDeluxe are software programs.
Subtractive colors – Transparent colors that can be combined to produce a full range of color. Subtractive colors subtract or absorb elements of
light to produce other colors.
TIFF (Tagged Image File Format) -The standard file format for high-resolution bit-mapped graphics. TIFF files have cross-platform compatibility.
TWAIN – Protocol for exchanging information between applications and devices such as scanners and digital cameras. TWAIN makes it possible for
digital cameras and software to “talk” with one another on PCs.
Unsharp Masking – A process by which the apparent detail of an image is increased; generally accomplished by the input scanner or through
computer manipulation.
USB (Universal Serial Bus) -The USB offers a simplified way to attach peripherals and have them be recognized by the computer. USB ports are about 10
times faster than a typical serial connection. These USB ports are usually located in easy to access locations on the computer.
Virtual Memory -Disk space on a hard drive that is identified as RAM through the operating system, or other software. Since hard drive memory is often less
expensive than additional RAM, it is an inexpensive way to get more memory and increase the operating speed of applications.
WYSIWYG – What You See Is What You Get. Refers to the ability to output data from the computer exactly as it appears on the screen.

2 March 2015

Doc Fortnight 2015

MoMA Film – Doc Fortnight 2015

https://www.youtube.com/playlist?list=PLfYVzk0sNiGHqXHhBr2gTSOy0VMXZ2So3

[youtube https://www.youtube.com/watch?v=ImX648BEVig]
—————————————
[youtube https://www.youtube.com/watch?v=bgcSt-5UIuw]

Published on Mar 4, 2013
A production by Stedelijk Museum Amsterdam for ARTtube. More info: http://arttube.nl/en/Stedelijk/Mike_K…

Widely acknowledged as an artist who defined his era, Mike Kelley (1954–2012) created a stunning and protean legacy that encompasses painting, sculpture, works on paper, installation, performance, music, video, photography, collaborative works and critical texts. In the largest exhibition of his work ever organized—and the first comprehensive survey attempted since 1993—the Stedelijk Museum Amsterdam presentation of Mike Kelley brings together over 200 works, spanning the artist’s 35-year career.

Organized chronologically for the most part, Mike Kelley fills virtually all of the 1792-square-meter (19,289-square-foot) temporary exhibition space in the new building of the expanded Stedelijk Museum. The exhibition will constitute an overview of the artist’s work from the mid-1970s until shortly before his death, allowing visitors to understand and appreciate the full scope of his achievements.

“Mike Kelley’s brilliance was rooted in his ability to dig critically into a world of cultural productions, representations, and constructions in all their messy contradictions, using a combination of incisive wit, poetic insight and uncanny associative power,” Ann Goldstein commented. “Nothing is sacrosanct in his work—not so-called high culture, history, literature, music, philosophy, psychology, religion or education. In bringing together his interest in so-called low culture—from crafts to comic strips—with a reconsideration of identity and sexuality, he was nothing less than revelatory.”

Credits

Music by Mike Kelley: Day is Done – Original Motion Picture Soundtrack (2005)

Photo: Mike Kelley (Wayne, MI (US), 1954 – South Pasadena, CA (US), 2012): Banana Man Costume (1981), Lifesize Courtesy Mike Kelley Foundation for the Arts

Met dank aan / Thanks to: Mike Kelley Foundation, Claire van Els, Rixt Hulshoff Pol, Dorine van Kampen
Interviews & research: Fieke Tissink, Eline Timmer
Camera & editing: Maaike Sips

Production: Bobcat Media

[youtube https://www.youtube.com/watch?v=Fc3hdDNJsLw]

http://www.shiva-n.com

27 February 2015

Warhol short movies

https://www.youtube.com/playlist?list=PLjcCm_aLk87RSfQbpCG_umpLYhladPImD
