(A continuation of thoughts about the Columbia Building Intelligence Project - see my earlier post for more commentary.)
C-BIP Studio has now ended, and we're working on our final exhibition materials (more to come). So now I'd like to look back on how the second half of the semester went. In this post I want to get into more detail about the actual structure and methodology of the studio. As I said before, I think the studio had really interesting goals and an environmental ethic that matched up well with current thought in architecture and planning. The workflow proposed to achieve these goals, however, I found much less convincing, and in fact I think the studio "system" was poorly designed. In the language of our critics, the design of the studio workflow was a "missed opportunity" to achieve some really interesting results.
To briefly summarize the mandated workflow: in the first half of the semester, we were asked to design parametric building "elements" that could be used to retrofit a building; these elements were supposed to have some impact on building energy use, but otherwise were freely designed. In the second half of the semester, we formed groups of four and attempted to design a building "strategy" that made use of the elements. The goal was to create a renovation process, using the elements in combination, that could achieve energy goals alongside other aims, and could be applied to multiple buildings in the city. Each group began with a specific building of a certain type, and was supposed to generalize its strategy to apply to other buildings of the same type. We were instructed not to use our own elements in these strategies, and not to alter the elements created by other students, but instead to propose "feature requests" whenever we wanted a change made to an element. The unstated goal was to avoid "Frankenstein" buildings that merely tacked the elements on without integrating them.
In the first half of the semester, everyone more or less succeeded at developing a parametric element, using a standardized interface provided by the studio instructors. But problems arose as soon as we began exchanging our elements and testing them over a complex local network. Differing levels of skill in networking, scripting, and interface design meant that the elements varied widely in their usability. This issue, unfortunately, never got resolved despite the best efforts of the TAs; even at the semester's end, many elements still did not function as their authors intended.
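To make the "element" idea more concrete for readers outside the studio, here is a minimal sketch of what a standardized parametric element interface might look like - written in Python for legibility rather than in CATIA's own scripting tools, with class and parameter names invented for illustration, not taken from the actual interface our instructors provided:

```python
# A hypothetical sketch of a standardized parametric element interface.
# All names here are my own invention; the studio's CATIA interface differed.

class ParametricElement:
    """A retrofit element exposing named parameters and an energy estimate."""

    def __init__(self, **params):
        self.params = dict(params)

    def set_param(self, name, value):
        if name not in self.params:
            raise KeyError(f"unknown parameter: {name}")
        self.params[name] = value

    def energy_impact(self, building):
        """Estimated change in annual energy use (kWh) for a building."""
        raise NotImplementedError


class SunshadeElement(ParametricElement):
    """Example element: exterior sunshades, parameterized by depth and spacing."""

    def __init__(self, depth_m=0.6, spacing_m=1.2):
        super().__init__(depth_m=depth_m, spacing_m=spacing_m)

    def energy_impact(self, building):
        # Toy model: deeper, more closely spaced shades cut more cooling load.
        shading = self.params["depth_m"] / self.params["spacing_m"]
        return -0.1 * shading * building.get("cooling_load_kwh", 0)


shade = SunshadeElement(depth_m=0.8)
print(shade.energy_impact({"cooling_load_kwh": 50_000}))  # roughly -3333 kWh
```

The appeal of a shared interface like this is that any element can be swapped into any strategy; the catch, as we discovered, is that everyone has to implement it competently for the exchange to work.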
In the second half of the semester, the lack of functional elements made designing a building strategy around them basically impossible, except at a very small, modular scale - which, wisely, was the route many groups chose to follow. Our group tried to implement the elements at a large scale, across not just one building but an entire block, and found it nearly impossible, within the studio constraints, to develop a strategy that was interesting, effective, and working. Many groups avoided the CATIA component of the strategy altogether, or brought it in at the last minute, choosing instead to focus on the conceptual framework of the strategy itself. Our group then tried to design a parametric massing tool, as the critics urged, but ran out of time to create something that actually functioned; in that sense, we failed to achieve the studio's goals. Along the way, we developed what I consider an interesting building strategy, but it has nothing to do with the elements developed by the other students - it could be carried out without any reference to them.
I think the major problem with the studio workflow was confusion over whether the goal of the studio was to create a building system - the stated intention - or a parametric tool. The studio rhetoric was that we were all using CATIA, advanced parametric modeling software driven by scripting, to design the elements and building strategies; in reality, we were being asked to design tools that assist with building strategies. We had exceptional support from TAs, outside consultants, and our unique studio environment, but we are not programmers, and writing tools is hard, so very few of us (perhaps none) managed to produce tools that were useful, interesting, and functional. Most of the tools achieved one of those qualities - they were useful, or interesting, or functional - but few achieved more than one.
The proscription against altering other students' elements, and against using one's own element, compounded the issues above. The idea was to mimic distributed open-source development, where users request features and owners decide whether to accept or reject the changes; but in reality, programmers use their own tools (or they wouldn't write them), and users are free to contribute actual code to add functionality, not just to propose ideas for changes. I think the groups that were most successful at using the elements were also the ones that hacked them the most, with or without their owners' consent.
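The difference between the two models is stark when put in code terms. Here is a hypothetical sketch building on the SunshadeElement example above (again, the names and the feature-request mechanism are invented for illustration, not the studio's actual process): under the studio rules, a user's only sanctioned move was the equivalent of filing a request and waiting, while in an ordinary open-source workflow the user can simply fork and patch the element.

```python
# Hypothetical contrast, reusing the SunshadeElement sketch from earlier.

# Studio rule: the only sanctioned change is a feature request to the owner.
feature_requests = []

def request_feature(element_name, description):
    """File a request; nothing changes until the owner rewrites the element."""
    feature_requests.append({"element": element_name, "request": description})

request_feature("SunshadeElement", "account for facade orientation")

# Open-source practice: fork the element and patch it yourself.
class OrientedSunshade(SunshadeElement):
    """A user's fork, adding behavior the owner never implemented."""

    def energy_impact(self, building):
        base = super().energy_impact(building)
        # Scale the savings by how much of the facade actually faces south.
        return base * building.get("south_facing_fraction", 1.0)
```

The forked version works today; the feature request works only if the owner has the time, skill, and inclination to act on it - which, in a studio where everyone was racing their own deadlines, was rarely the case.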
So does this mean the C-BIP workflow was a failure? In the sense that many of us became incredibly frustrated and few (or none) of us achieved the studio's goals, yes - but I don't think that makes the studio as a whole a failure. Despite this semester's glaring problems, C-BIP was an important experiment in studio design, in workflow design, and in collaborative thinking and working. We also succeeded in drawing really interesting feedback and commentary from the guest critics at our final review - we got them thinking about the right issues.
If C-BIP lives on in future incarnations, I have a few suggestions - but I'll save those for the next post. For now, enjoy some photos of our studio, courtesy of Kim Nguyen!