Language as the Ultimate Weapon Entry
For most people, language is "just" a system for communicating, but for others it can be the perfect tool for manipulating societies. Used properly, language has the power to mask the truth and mislead the public, which can create a society that unquestioningly obeys its government and mindlessly accepts all propaganda as reality.
Several psychologists and theorists say that the words available for communicating thought tend to influence the way people think. Even the relations words have with our brains can affect memories, thoughts, and perceptions.
In Orwell's novel, the media is such a powerful tool for manipulation because the public is widely exposed to it and trusts it. The Party exposes the public, day by day, to propaganda that serves to manipulate them.
Newspeak doesn't have any words with negative connotations, meaning that when the media wants to communicate something, no one can interpret it as negative or bad. This matters because it leaves people unable to express or think anything other than the positive.
The media in Orwell's novel is skilled at engineering "truth" through language, and one of the most disturbing consequences developed in the novel is that the Party has ultimate control over history, since language is the link to history. This reminds me of one of the quotes from the book:
"Who controls the past controls the future: who controls the present controls the past."
Orwell's novel asks a philosophical question: if all available evidence shows something to be true, is it not true?
Part of the importance of the novel has to do with its relation to the present and to actual history, since the alteration of history has already been practiced. In the Stalinist era there was much distortion of history: Trotsky, at first the hero of the Civil War, was later erased from the record.
There are other similarities between our reality and the novel; just look at advertising, an industry based on the manipulation of language and thought.
As Berkes says, Orwell's novel carries an important warning about the powers of language and how it can be used: "Language is one of the key instruments of political dominations, the necessary and insidious means of the 'totalitarian' 'control of reality'" (Rai, 122).
Programming Languages 201911 Beto Vas
Wednesday, April 10, 2019
Wednesday, March 20, 2019
Pair Programming Entry
The article introduces one of many programming techniques, pair programming, which "is a practice in which two programmers work side-by-side at one computer, continuously collaborating on the same design, algorithm, code, or test."
This technique has many advantages, such as improving the productivity and quality of a software product; moreover, a survey found that programmers were more "confident" in their solutions when programming in pairs than when working alone.
The largest example of its success is the sizable Chrysler Comprehensive Compensation system, launched in May 1997. After initial development problems, Beck and Jeffries restarted development using XP (Extreme Programming). They trained their team in the new methodology, and in no time there was a remarkable change: a full room of paired programmers working on the same code at one computer, delivering tested (nearly 100% bug-free) code faster than ever. The pairs completed their tasks about 40% more quickly and effectively, producing better algorithms and code in less time.
But not everything is automatic; there is a series of principles behind why pair programming works so well. While one person types, the other is always reviewing the work — and the reviewer needs to be careful not to become a passive observer, staying engaged in the program.
It's also crucial to avoid excess ego in a team member, which is harmful to the technique: a "pro" programmer who is not open to other points of view will always do things his or her "way."
There is a certain resistance to making the transition to pair programming, but most teams make it with great success — the results speak for themselves. If your project or product is a good fit for pair programming, you should really consider it as an option.
Wednesday, March 13, 2019
The Secret History of Women in Coding
The history of coding has had an unusual development. Unlike other areas, coding is relatively new; in the early '50s there were no majors or formal studies for programming, and it was not until 1959 that a brand-new discipline was "established."
For this reason, at the beginning of the discipline the requirements for working in the area were not strict, and many women saw an opportunity and applied for "programming" jobs. Such is the case of Mary Allen Wilkes, who quickly became a programming wiz and, over the years, was part of important computing projects. Even earlier, during World War II, women had taken an active part in this kind of work, helping with activities like code-breaking and "programming."
Women were intrinsically tied to the evolution of computing; they were a key part of it. In the '50s and '60s, computer companies were fair when hiring staff: there were no prejudices, and if they hired someone it was because of their aptitude, not their gender, looks, or nationality. The programming workforce at some companies, like Raytheon, was 50 percent men and 50 percent women. Everything was going great, but it didn't last. By 1983 the percentage of women graduating in computer and information sciences had started to drop, and it kept falling in the following years, reaching 17.6 percent in 2010. But what happened? How did the discipline go from an equal population to 17.6 percent?
When personal computers arrived, boys were more likely to have been given a computer by their parents. Since computers represented something electrical, something "mechanical," families "established" that it should be handled by the son with the help of the father — something that didn't include girls. This carried over to school, where young men had a lot of experience and girls had almost none. That alone was not the problem; the problem was that schools made no effort to help initiate students with no experience, focusing instead on the people who already had experience because they happened to have had a computer for years. The atmosphere itself made things worse: the ones with experience made those without feel isolated and dumb.
As a consequence, by the '80s the early pioneering work done by female programmers had mostly been forgotten. And Hollywood didn't help, creating the image that computers were a male domain. Just look at movies like Tron or WarGames — the computer nerds were "always" young white men. Even the corporate sector started hiring personnel based on their appearance rather than their aptitude.
According to data from the Bureau of Labor Statistics, as of last year (2018) about 26 percent of workers in "computer and mathematical occupations" were women — still far from the days when it was 50/50. This data points to the need for computing programs to open up and consider less experienced people, and to make everyone aware that we have to be more inclusive and tolerant of people who are new to the area. It is also essential to put effort into transforming the corporate sector, since many men still hold the prejudice that women are not prepared or capable of occupying important positions that involve IT.
Wednesday, February 27, 2019
The Roots of Lisp Entry
"In 1960, John McCarthy published a remarkable paper in which he did for programming something like what Euclid did for geometry. He showed how, given a handful of simple operators and a notation for functions, you can build a whole programming language."
He called this language Lisp, for "List Processing".
According to the author, by the time the article was written (2002) there were two clean, consistent models of programming: the C model and the Lisp model. As computers have grown more powerful, the new languages being developed have been moving toward the Lisp model, and some of them borrow parts of both. The paper McCarthy published has a lot to do with this, given the way it dissects Lisp.
One of the key features of Lisp is that it can be written in itself (reviewed later).
To understand how McCarthy sees Lisp, we have to know that there are seven primitive operators in Lisp. First, an expression is either an atom (a sequence of letters) or a list of zero or more expressions, separated by whitespace and enclosed in parentheses. Here are the operators:
- (quote x) returns x.
- (atom x) returns the atom t if the value of x is an atom or the empty list, () otherwise.
- (eq x y) returns t if the values of x and y are the same atom or both the empty list, () otherwise.
- (car x) expects the value of x to be a list, and returns its first element.
- (cdr x) expects the value of x to be a list, and returns everything after the first element.
- (cons x y) expects the value of y to be a list, and returns a list containing the value of x followed by the elements of the value of y (like prepending an element).
- (cond (p1 e1) ... (pn en)) evaluates the p expressions in order until one returns t, and then returns the value of the corresponding e expression.
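To make the operators above concrete, here is a minimal sketch in Python rather than Lisp. The representation is my own modeling choice: atoms are Python strings, Lisp lists are Python lists, and 't' / [] stand in for Lisp's t and the empty list. (quote has no runtime counterpart here, since Python doesn't evaluate list literals as calls.)

```python
# Sketch of six of McCarthy's seven primitives on Python lists.
# Atoms are strings; 't' means true, [] means false/empty list.

def atom(x):
    # true if x is an atom (a string here) or the empty list
    return 't' if isinstance(x, str) or x == [] else []

def eq(x, y):
    # true if x and y are the same atom, or both the empty list
    return 't' if (x == y and atom(x) == 't') else []

def car(x):
    return x[0]      # first element of a list

def cdr(x):
    return x[1:]     # everything after the first element

def cons(x, y):
    return [x] + y   # prepend x to the list y

def cond(*clauses):
    # evaluate predicates in order; return the value of the
    # expression paired with the first predicate yielding 't'
    for p, e in clauses:
        if p() == 't':
            return e()
    return []
```

For example, `cons('a', ['b', 'c'])` gives `['a', 'b', 'c']`, and `cdr(['a', 'b', 'c'])` gives `['b', 'c']`.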
All these tools grant us the capacity to write a function that acts as an interpreter for our language: a function that takes any Lisp expression as an argument and returns its value. This capacity is basically the eval function McCarthy defines in his paper.
Implementing the eval function adds more possibilities to Lisp: now we can define any additional function we want (quite similar to the purpose of Turing machines).
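A drastically reduced sketch of the idea, in Python. This is not McCarthy's eval — it only handles closed expressions over the seven operators, and representing S-expressions as nested Python lists (with atoms as strings) is an assumption of the sketch:

```python
# Toy evaluator for closed S-expressions over the seven primitives.
# An expression is a nested Python list like ['cons', ['quote', 'a'], ['quote', ['b']]].

def ev(e):
    if isinstance(e, str):
        raise ValueError("no environment in this sketch: only closed expressions")
    op = e[0]
    if op == 'quote':                 # (quote x) returns x unevaluated
        return e[1]
    if op == 'atom':
        x = ev(e[1])
        return 't' if isinstance(x, str) or x == [] else []
    if op == 'eq':
        x, y = ev(e[1]), ev(e[2])
        return 't' if x == y and (isinstance(x, str) or x == []) else []
    if op == 'car':
        return ev(e[1])[0]
    if op == 'cdr':
        return ev(e[1])[1:]
    if op == 'cons':
        return [ev(e[1])] + ev(e[2])
    if op == 'cond':                  # clauses are (predicate, result) pairs
        for p, r in e[1:]:
            if ev(p) == 't':
                return ev(r)
        return []
    raise ValueError("unknown operator: " + str(op))
```

For example, `ev(['cons', ['quote', 'a'], ['quote', ['b']]])` returns `['a', 'b']`. McCarthy's real eval goes further — it adds an environment, lambda, and labeled functions — which is exactly what makes Lisp definable in itself.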
It's no secret that Lisp lacks some important features: it has no side effects, no sequential execution, no practical numbers, and only dynamic scope. But these limitations can be remedied with surprisingly little additional code.
The author emphasizes that Lisp is not intrinsically a language for AI or for rapid prototyping. Lisp is what you get when you try to axiomatize computation.
Paul goes deep into the roots of Lisp by analyzing what McCarthy published; he in fact reaches the bottom and explains the key points of Lisp, like the eval function, what's needed for it, and so on. This article clears up some misunderstandings about Lisp and its capabilities; it's well managed and is now a good source for starting to learn the Lisp language.
Tuesday, February 12, 2019
Rich Hickey on Clojure Entry
Rich Hickey, the mastermind behind Clojure, introduces his creation: a language based on Lisp, a dynamic programming language for the JVM with a strong focus on concurrency.
Lisp is unique in that it presents programs as data structures, making it "easier" to work with lists, vectors, maps, etc. Another advantage of Lisp is that using macros is simpler than in other languages.
But why did Rich have to create his own version of Lisp?
Lisp seemed promising but, at bottom, it lacked a robust platform — not to mention that it was tied to being a language for AI, so a "stuck" AI field made Lisp stuck too.
But what about the innovations of Clojure?
Having programs represented as data structures means having the potential to make programs that write programs, so metaprogramming becomes simpler.
Some operations are better encapsulated, and Clojure manages that pretty well.
Clojure retained the power of Lisp, but it's not just another implementation: it can also "interact" with Java programs, adding dynamism and instantly taking it off the "island" Lisp was on.
Clojure also has a simpler syntax.
What are the main differences between Lisp and Clojure?
Traditional Lisp has mutable structures, which Clojure lacks: Clojure's data structures are immutable. Immutable data doesn't mean making a full copy for every change; you manipulate a new "value" that shares structure with the old one, which avoids many problems.
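A rough illustration of that sharing idea, sketched in Python rather than Clojure (the tuple-based linked list is my own stand-in for Clojure's persistent data structures):

```python
# Persistent (immutable) singly linked list built from 2-tuples.
# "Updating" the front creates one new cell that shares the entire
# old list as its tail -- no existing cells are copied or mutated.

EMPTY = None

def prepend(value, lst):
    return (value, lst)      # new cell pointing at the old list

def to_list(lst):
    # flatten to a plain Python list, for inspection
    out = []
    while lst is not None:
        out.append(lst[0])
        lst = lst[1]
    return out

base = prepend(2, prepend(3, EMPTY))   # the list [2, 3]
v1 = prepend(1, base)                  # a "new" list [1, 2, 3]
v2 = prepend(0, base)                  # another list [0, 2, 3]

# both versions share the very same tail object: no copying happened
assert v1[1] is base and v2[1] is base
```

Clojure's real structures generalize this trick to vectors, maps, and sets, so "updates" are cheap even though every value stays immutable.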
Clojure is modern, with an API built upon abstractions behind all the data structures (meaning they are not just concrete structures).
Clojure provides several reference types:
The atom, which is synchronous and uncoordinated (changes are atomic, one at a time).
The agent, which is asynchronous and uncoordinated.
The ref, which changes inside a transaction and is synchronous and coordinated (it uses software transactional memory).
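As a sketch of the atom's "one change at a time" behavior — in Python, not Clojure, and with a lock standing in for Clojure's actual machinery (the Atom class and its method names are my own):

```python
import threading

class Atom:
    """Toy model of a Clojure atom: holds one value, and swap
    applies a pure function to it atomically."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def deref(self):
        return self._value

    def swap(self, fn, *args):
        with self._lock:                 # one change at a time
            self._value = fn(self._value, *args)
            return self._value

counter = Atom(0)

def bump_1000():
    for _ in range(1000):
        counter.swap(lambda v: v + 1)

threads = [threading.Thread(target=bump_1000) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

assert counter.deref() == 4000           # no lost updates
```

The real Clojure atom uses compare-and-swap and may retry the function, which is why swap's function must be pure — the same discipline this toy version relies on.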
Why a new language, and not just add these features to Lisp?
Because of the lack of idiomatic support: when programming with values, it can be painful to use a language other than Clojure, where it is "natural."
This podcast was made mainly to clarify doubts about the language, giving a bit of context, the key attributes of the language, etc.
Wednesday, February 6, 2019
Dick Gabriel on Lisp Entry
Lisp is a functional programming language that uses a lot of nested functions; one of its main advantages is that programs and data are basically the same, which makes programming easier.
So, for practical purposes, we introduce Dick Gabriel: a programmer with a long career and an interesting background, and also a researcher who happens to be a Lisp "expert."
Gabriel does a curious analysis of Lisp: it's funny that a functional programming language happens to have a non-functional core — the imperative side of Lisp, which offers a set of instructions such as sequencing. In Lisp everything is evaluated; that's one of the main reasons the first thing you learn is to use quote on lists.
According to Gabriel, the grand goal of Lisp was to serve as support for the AI community, maybe because of its smart evaluation and the big scope it has. Lisp also introduces an interesting concept, the meta-circular interpreter, which basically refers to an interpreter for the language written in the language itself — imagine the odds.
Like other languages, Lisp also has macros — operations that produce the expression you want — which make the language very flexible. Other features are continuations (a way of resuming functions) and hygienic macros.
Now let's talk about Lisp's main areas of use: Lisp is used as a research programming language, and also for some systems, like the reservation system Gabriel mentions.
In the end, what happened with Lisp? The AI winter happened: there was so much hype for AI, with Lisp leading the "implementation," that when there were no business results, of course they blamed Lisp — when the real problem was the angle from which they were viewing it.
So today Lisp remains a great functional programming language, basically serving as one of the tools for research.
Wednesday, January 30, 2019
The Promises of Functional Programming Entry
The way we develop software has evolved along with the hardware that sustains it. As the article mentions, "We moved from machine code to assembly languages and then problem-oriented programming, which have evolved to integrate techniques such as structural and object-oriented programming." We went from monolithic programs via separately compilable modules and libraries to software component technologies.
Following this line of searching for new alternatives, a different approach was developed as a mathematical theory in the 1930s (Alonzo Church's λ-calculus) and as a programming technique in the 1950s (John McCarthy's Lisp language).
So on one hand we have functional programming, which was harder to compile for the hardware and requires a lot of learning and unlearning, and on the other hand imperative programming (the traditional kind), which is easier to compile into efficient machine code. Functional programming was designed for other types of operations, like calculus, whereas imperative programming could be more "general."
In recent years there has been an increase in people interested in functional programming because of the advantages this paradigm provides for concurrent and parallel programming. It is also suggested that functional programs are more robust and easier to test than imperative ones.
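A small Python sketch of why this helps with parallelism: a pure function touches no shared state, so mapping it over data on several threads must give the same result as doing it sequentially (the function and pool size here are just illustrative choices):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # pure: the result depends only on the argument; no shared state is touched
    return n * n

data = list(range(100))

sequential = [square(n) for n in data]

# Because square has no side effects, the calls can run in any order,
# on any thread, without locks -- the outcome cannot differ.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, data))

assert parallel == sequential
```

With mutable shared state, the same experiment would need locks or could silently lose updates; purity is what makes the parallel version trivially correct.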
The most recent Lisp dialect is Clojure, which sets itself apart by three features: it has four highly optimized data structures (lists, vectors, maps, and sets) designed for pure functional programming; it offers extensive support for concurrency; and it was designed for the Java Virtual Machine with the goal of easy interoperability with other JVM languages, including Java itself.
Functional programming appeared as the answer to some of the operations academics couldn't accomplish with imperative programming. In recent years more and more people have chosen functional programming because of the way this paradigm manages concurrent and parallel programming, among other advantages. In the end, it's always good to have diversity, even in programming paradigms.