Knowing the Unknowable: Benefits and Drawbacks to Agent-Based and Equation-Based Modeling


Edward Lorenz’s famous hypothetical scenario, in which a butterfly flaps its wings and somehow, weeks later, causes a tornado hundreds of miles away, illustrates how chaotic and uncertain the future consequences of seemingly inconsequential actions can be. Understanding this complexity, and the uncertainty it creates in predicting the future, is vital to deciding what actions must be taken today to minimize the future negative effects of climate change.

Traditional Modeling

Fortunately, our ability to make informed guesses about what the future will hold has improved significantly in recent years, thanks to advances in computing and data collection. Several classes of software models have become widely used to predict outcomes within a complex system (note: this list is not exhaustive).

  1. Macro-econometric models simulate the impact of an economic or cross-sectoral policy decision on the economy. These models use several simultaneously run equations and have recently been used to model the economic impacts of energy and climate policies.
  2. System dynamics models use non-linear equations to quantify a future output from a complex system. They have proved very useful in projecting the ecological and geophysical outcomes of a 2-degree global temperature change.
  3. Bayesian network models use probability theory to infer likely effects from an initial input state. This makes them well suited to providing early warnings, but only in situations where there is adequate precedent (e.g., medical diagnosis or an automated help system).
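
To make the equation-based idea concrete, here is a minimal system dynamics sketch (illustrative only, not drawn from any model named above): a single stock with an inflow and a stock-dependent outflow, integrated with simple Euler steps. The quantities and rates are invented for demonstration.

```python
# Illustrative system dynamics sketch: one stock (e.g., atmospheric carbon)
# driven by a constant emissions inflow and a sink outflow proportional to
# the stock, integrated with Euler steps. All numbers are made up.

def simulate_stock(initial_stock, emissions, uptake_rate, years, dt=1.0):
    """Return the stock trajectory over `years` using Euler integration."""
    stock = initial_stock
    trajectory = [stock]
    for _ in range(int(years / dt)):
        inflow = emissions                 # constant inflow per year
        outflow = uptake_rate * stock      # outflow grows with the stock
        stock += (inflow - outflow) * dt
        trajectory.append(stock)
    return trajectory

traj = simulate_stock(initial_stock=850.0, emissions=10.0,
                      uptake_rate=0.005, years=50)
print(f"stock after 50 years: {traj[-1]:.1f}")
```

The point of such a model is that the entire system is captured in the equations: once the inflow and outflow rules are written down, the future trajectory follows mechanically, with no room for individual decision-making.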

Each of these models fills an important role, but developing any of them requires a refined understanding of the processes that will influence the system throughout the time window being studied. Failure to fully understand or acknowledge the intricacies of a system or pathway can have enormous ramifications for the output metric, as Nicholas Stern argues is the case with existing attempts to set a social cost of carbon.

Agent-Based Modeling

The processes of some systems are not easily inferred from existing data. This is often the case when the future direction of a system is heavily influenced by the autonomous decisions of individuals. For example, to calculate how best to relieve traffic congestion, it may not be enough to know the width of the road, the timing of traffic lights, and the density of cars. Individuals may be in a rush, get frustrated, and behave in irrational or reckless ways that increase congestion. An equation- or probability-based model may fail to account for such abnormal behavior, and would therefore be ineffective at predicting problems.

The only way to accurately model such a system is to simulate each autonomous agent’s behavior based on a set of individualized goals and priorities. This method of modeling complex systems is called Agent-Based Modeling (ABM), an alternative to the more traditional equation-based models described above. In an ABM, emergent patterns often reveal themselves in ways that could not have been captured by equation-based models.
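The traffic example above can be sketched as a toy ABM. This is a deliberately minimal illustration (not any published model): commuters repeatedly choose between two routes, with "calm" agents chasing yesterday's emptier route and "impatient" agents switching erratically. The share of impatient agents and all other parameters are invented.

```python
import random

random.seed(0)  # reproducible runs

class Commuter:
    """A single agent with its own disposition and current route choice."""
    def __init__(self, impatient):
        self.impatient = impatient
        self.route = random.choice("AB")

    def decide(self, counts):
        if self.impatient and random.random() < 0.5:
            self.route = random.choice("AB")          # erratic switching
        else:
            self.route = min(counts, key=counts.get)  # chase the emptier route

def run(n_agents=100, share_impatient=0.3, days=20):
    agents = [Commuter(random.random() < share_impatient)
              for _ in range(n_agents)]
    counts = {"A": 0, "B": 0}
    for a in agents:
        counts[a.route] += 1
    history = []
    for _ in range(days):
        for a in agents:
            a.decide(counts)         # everyone reacts to yesterday's counts
        counts = {"A": 0, "B": 0}
        for a in agents:
            counts[a.route] += 1
        history.append(dict(counts))
    return history

history = run()
print(history[-1])
```

Even this tiny model exhibits an emergent pattern no single rule dictates: because every calm agent chases the same emptier route at once, the crowd tends to overshoot and oscillate between routes, while the impatient minority keeps the system from ever settling.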

ABM to Assess Emission Reductions in Philadelphia

A team of Penn researchers, supported in part by funding from the Kleinman Center, has begun to develop a complex agent-based model for assessing policies aimed at achieving carbon emission reduction goals in the Greater Philadelphia region by 2050. The model, which is based on the Theory of Planned Behavior, uses census and survey data, energy reports, and other publicly accessible data for the Philadelphia region to design intention-driven agents that are, taken as a group, representative of the residents of Philadelphia. By assigning agents different attributes, such as political party affiliation and awareness of their transportation choices’ impact on the climate and environment, the ABM seeks to accurately predict the behavior of Philadelphia’s population under different environmental and societal conditions; specifically, the population’s vehicle and transportation mode choices under varying gas prices and social pressure. As development progresses, the researchers expect the model to demonstrate the circumstances and policies under which an 80% reduction in emissions by 2050 is possible.
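The Theory of Planned Behavior combines an agent's attitude, subjective norms, and perceived behavioral control into an intention. A hedged sketch of how one such intention-driven mode choice might look is below; all weights, attribute names, and thresholds here are invented for illustration, and the Penn model's actual specification is described in the team's working paper.

```python
import math

def intention_to_drive(attitude, social_norm, perceived_control,
                       w_att=0.5, w_norm=0.3, w_ctrl=0.2):
    """Combine TPB components into a driving intention in (0, 1).
    The weights are illustrative, not calibrated to any data."""
    score = w_att * attitude + w_norm * social_norm + w_ctrl * perceived_control
    return 1.0 / (1.0 + math.exp(-score))  # squash to a probability-like value

def agent_drives(gas_price, climate_aware, peer_pressure_transit):
    # Higher gas prices and climate awareness lower the attitude toward
    # driving; peer pressure toward transit lowers the subjective norm.
    attitude = 2.0 - gas_price - (1.0 if climate_aware else 0.0)
    social_norm = 1.0 - 2.0 * peer_pressure_transit
    perceived_control = 1.0   # assume a car is available
    return intention_to_drive(attitude, social_norm, perceived_control) > 0.5

print(agent_drives(gas_price=2.5, climate_aware=True, peer_pressure_transit=0.8))
```

Running thousands of such agents with attributes drawn from census and survey distributions, and then sweeping gas prices or social pressure, is the basic mechanism by which an ABM of this kind turns individual intentions into population-level mode shares.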

The decisions made about agent attributes in an ABM have enormous consequences for the resulting outputs. For this reason, agent-based models should be kept as simple as possible, their agent attributes should be firmly grounded in data, and they should be continually updated as new data better informs researchers about the behaviors being modeled.

To learn more about the research of Nasrin Kahnsari, John B. Waldt, Barry Silverman, William W. Braham, Karen Shen, and Jae Min Lee, please see their working paper on the subject.

Oscar Serpell

Traditional Modeling

Fortunately, our ability to make informed guesses about what the future will hold has significantly improved in recent years thanks to advancements made in computing and data collection. There are several classifications of software models which have become widely used to predict outcomes within a complex system (note: this list is not exhaustive).

  1. Macro-econometric models simulate the impact of an economic or cross-sectoral policy decision on the economy. These models use several simultaneously run equations and have recently been used to model the economic impacts of energy and climate policies.
  2. System dynamics models use non-linear equations to quantify a future output from a complex system. They have proved very useful in projecting the ecological and geophysical outcomes of a 2-degree global temperature change.
  3. Bayesian network models use probability theory to infer likely effects from an initial input state. This makes these models very apt at providing early warnings, but only in situations where there is adequate precedent (e.g. medical diagnosis or an automated help system).

Each of these models fills an important role, but developing any of them requires a refined understanding of the processes that will influence the system throughout the temporal window being studied. Failure to fully understand or to acknowledge intricacies of a system or pathway can have enormous ramifications for the output metric, as Nicolas Stern argues is the case with existing attempts to set a social cost of carbon.

Agent Base Modeling

The processes of some systems are not easily inferred from existing data. This is often the case when the future direction of a system is heavily influenced by the autonomous decisions of individuals. For example, to calculate how best to relieve traffic congestion, it may not be enough to know the width of the road, the timing of traffic lights, and the density of cars. Individuals may be in a rush, get frustrated, and behave in irrational or reckless ways that increase congestion. An equation or probability based model may fail to account for abnormal behavior, and therefore would be ineffective at predicting problems.

The only way to accurately model this system is to try to integrate each autonomous agent’s behavior based on a set of individualized goals and priorities. This method of modeling complex systems is called Agent Based Modeling – or ABM, which is an alternative to more traditional equation-based models previously described. In ABM, emergent patterns often reveal themselves in ways that could not have been represented by equation based models.  

ABM to assess emission reduction in Philadelphia

A team of Penn researchers, partly supported by funding from the Kleinman Center, have begun to develop a complex agent based model to be used as a tool to assess policies for achieving carbon emission reduction goals in the Greater Philadelphia region by 2050. The model, which is based on the Theory of Planned Behavior, uses census and survey data, energy reports, and other publically accessible data for the Philadelphia region to design intention driven agents which are, when taken as a group, representative of the residents of Philadelphia. By assigning agents different attributes, such as political party affiliation and awareness of their transportation choice’s impact on the climate and environment, this ABM seeks to accurately predict the behavior of Philadelphia’s population under different environmental and societal conditions; specifically, the population’s vehicle and transportation mode choice under varying gas prices and societal pressure. As the development of this model progresses, the researchers expect that it will demonstrate the circumstances and policies under which an 80% reduction in emissions by 2050 is possible.

The decisions made about agent attributes in an ABM have enormous consequences on the resulting outputs. For this reason, agent models should be kept as simple as possible, should try to ensure that agent attributes are firmly grounded in data, and should constantly be updated as new data better informs researchers of the behaviors being modeled.

To learn more about Nasrin Kahnsari’s, John B.Waldt’s, Barry Silverman’s, William W. Braham’s, Karen Shen’s, and Jae Min Lee’s research, please see their working paper on the subject

[safe_summary] => ) ) ) [taxonomy_wp_blog_tags] => Array ( ) [field_intro_image] => Array ( [und] => Array ( [0] => Array ( [fid] => 1997 [uid] => 118 [filename] => pexels-photo-241544.jpeg [uri] => public://pexels-photo-241544.jpeg [filemime] => image/jpeg [filesize] => 72998 [status] => 1 [timestamp] => 1497558409 [focus_rect] => [crop_rect] => [rdf_mapping] => Array ( ) [alt] => [title] => [width] => 1125 [height] => 750 ) ) ) [field_blog_author] => Array ( [und] => Array ( [0] => Array ( [value] => Oscar Serpell [format] => [safe_value] => Oscar Serpell ) ) ) [field_image_caption] => Array ( ) [field_set_as_featured_] => Array ( [und] => Array ( [0] => Array ( [value] => no ) ) ) [field_authors] => Array ( ) [field_addthis] => Array ( [und] => Array ( [0] => Array ( [value] => Dummy value ) ) ) [rdf_mapping] => Array ( [rdftype] => Array ( [0] => sioc:Item [1] => foaf:Document ) [title] => Array ( [predicates] => Array ( [0] => dc:title ) ) [created] => Array ( [predicates] => Array ( [0] => dc:date [1] => dc:created ) [datatype] => xsd:dateTime [callback] => date_iso8601 ) [changed] => Array ( [predicates] => Array ( [0] => dc:modified ) [datatype] => xsd:dateTime [callback] => date_iso8601 ) [body] => Array ( [predicates] => Array ( [0] => content:encoded ) ) [uid] => Array ( [predicates] => Array ( [0] => sioc:has_creator ) [type] => rel ) [name] => Array ( [predicates] => Array ( [0] => foaf:name ) ) [comment_count] => Array ( [predicates] => Array ( [0] => sioc:num_replies ) [datatype] => xsd:integer ) [last_activity] => Array ( [predicates] => Array ( [0] => sioc:last_activity_date ) [datatype] => xsd:dateTime [callback] => date_iso8601 ) ) [path] => Array ( [pathauto] => 1 ) [cid] => 0 [last_comment_timestamp] => 1497554965 [last_comment_name] => [last_comment_uid] => 118 [comment_count] => 0 [name] => mollie [picture] => 0 [data] => b:0; [entity_view_prepared] => 1 ) [#items] => Array ( [0] => Array ( [value] =>

Edward Lorenz’s famous hypothetical scenario of a butterfly flapping its wings and somehow, weeks later, causing a tornado hundreds of miles away, illustrates the chaotic and uncertain future outcomes of seemingly inconsequential behaviors taken today. This complexity, and the resulting uncertainty of predicting the future, is vital to our understanding of what actions must be taken today to minimize the future negative effects of climate change.

Traditional Modeling

Fortunately, our ability to make informed guesses about what the future will hold has significantly improved in recent years thanks to advancements made in computing and data collection. There are several classifications of software models which have become widely used to predict outcomes within a complex system (note: this list is not exhaustive).

  1. Macro-econometric models simulate the impact of an economic or cross-sectoral policy decision on the economy. These models use several simultaneously run equations and have recently been used to model the economic impacts of energy and climate policies.
  2. System dynamics models use non-linear equations to quantify a future output from a complex system. They have proved very useful in projecting the ecological and geophysical outcomes of a 2-degree global temperature change.
  3. Bayesian network models use probability theory to infer likely effects from an initial input state. This makes these models very apt at providing early warnings, but only in situations where there is adequate precedent (e.g. medical diagnosis or an automated help system).

Each of these models fills an important role, but developing any of them requires a refined understanding of the processes that will influence the system throughout the temporal window being studied. Failure to fully understand or to acknowledge intricacies of a system or pathway can have enormous ramifications for the output metric, as Nicolas Stern argues is the case with existing attempts to set a social cost of carbon.

Agent Base Modeling

The processes of some systems are not easily inferred from existing data. This is often the case when the future direction of a system is heavily influenced by the autonomous decisions of individuals. For example, to calculate how best to relieve traffic congestion, it may not be enough to know the width of the road, the timing of traffic lights, and the density of cars. Individuals may be in a rush, get frustrated, and behave in irrational or reckless ways that increase congestion. An equation or probability based model may fail to account for abnormal behavior, and therefore would be ineffective at predicting problems.

The only way to accurately model this system is to try to integrate each autonomous agent’s behavior based on a set of individualized goals and priorities. This method of modeling complex systems is called Agent Based Modeling – or ABM, which is an alternative to more traditional equation-based models previously described. In ABM, emergent patterns often reveal themselves in ways that could not have been represented by equation based models.  

ABM to assess emission reduction in Philadelphia

A team of Penn researchers, partly supported by funding from the Kleinman Center, have begun to develop a complex agent based model to be used as a tool to assess policies for achieving carbon emission reduction goals in the Greater Philadelphia region by 2050. The model, which is based on the Theory of Planned Behavior, uses census and survey data, energy reports, and other publically accessible data for the Philadelphia region to design intention driven agents which are, when taken as a group, representative of the residents of Philadelphia. By assigning agents different attributes, such as political party affiliation and awareness of their transportation choice’s impact on the climate and environment, this ABM seeks to accurately predict the behavior of Philadelphia’s population under different environmental and societal conditions; specifically, the population’s vehicle and transportation mode choice under varying gas prices and societal pressure. As the development of this model progresses, the researchers expect that it will demonstrate the circumstances and policies under which an 80% reduction in emissions by 2050 is possible.

The decisions made about agent attributes in an ABM have enormous consequences on the resulting outputs. For this reason, agent models should be kept as simple as possible, should try to ensure that agent attributes are firmly grounded in data, and should constantly be updated as new data better informs researchers of the behaviors being modeled.

To learn more about Nasrin Kahnsari’s, John B.Waldt’s, Barry Silverman’s, William W. Braham’s, Karen Shen’s, and Jae Min Lee’s research, please see their working paper on the subject

[summary] => [format] => full_html [safe_value] =>

Edward Lorenz’s famous hypothetical scenario of a butterfly flapping its wings and somehow, weeks later, causing a tornado hundreds of miles away, illustrates the chaotic and uncertain future outcomes of seemingly inconsequential behaviors taken today. This complexity, and the resulting uncertainty of predicting the future, is vital to our understanding of what actions must be taken today to minimize the future negative effects of climate change.

Traditional Modeling

Fortunately, our ability to make informed guesses about what the future will hold has significantly improved in recent years thanks to advancements made in computing and data collection. There are several classifications of software models which have become widely used to predict outcomes within a complex system (note: this list is not exhaustive).

  1. Macro-econometric models simulate the impact of an economic or cross-sectoral policy decision on the economy. These models use several simultaneously run equations and have recently been used to model the economic impacts of energy and climate policies.
  2. System dynamics models use non-linear equations to quantify a future output from a complex system. They have proved very useful in projecting the ecological and geophysical outcomes of a 2-degree global temperature change.
  3. Bayesian network models use probability theory to infer likely effects from an initial input state. This makes these models very apt at providing early warnings, but only in situations where there is adequate precedent (e.g. medical diagnosis or an automated help system).

Each of these models fills an important role, but developing any of them requires a refined understanding of the processes that will influence the system throughout the temporal window being studied. Failure to fully understand or to acknowledge intricacies of a system or pathway can have enormous ramifications for the output metric, as Nicolas Stern argues is the case with existing attempts to set a social cost of carbon.

Agent Base Modeling

The processes of some systems are not easily inferred from existing data. This is often the case when the future direction of a system is heavily influenced by the autonomous decisions of individuals. For example, to calculate how best to relieve traffic congestion, it may not be enough to know the width of the road, the timing of traffic lights, and the density of cars. Individuals may be in a rush, get frustrated, and behave in irrational or reckless ways that increase congestion. An equation or probability based model may fail to account for abnormal behavior, and therefore would be ineffective at predicting problems.

The only way to accurately model this system is to try to integrate each autonomous agent’s behavior based on a set of individualized goals and priorities. This method of modeling complex systems is called Agent Based Modeling – or ABM, which is an alternative to more traditional equation-based models previously described. In ABM, emergent patterns often reveal themselves in ways that could not have been represented by equation based models.  

ABM to assess emission reduction in Philadelphia

A team of Penn researchers, partly supported by funding from the Kleinman Center, have begun to develop a complex agent based model to be used as a tool to assess policies for achieving carbon emission reduction goals in the Greater Philadelphia region by 2050. The model, which is based on the Theory of Planned Behavior, uses census and survey data, energy reports, and other publically accessible data for the Philadelphia region to design intention driven agents which are, when taken as a group, representative of the residents of Philadelphia. By assigning agents different attributes, such as political party affiliation and awareness of their transportation choice’s impact on the climate and environment, this ABM seeks to accurately predict the behavior of Philadelphia’s population under different environmental and societal conditions; specifically, the population’s vehicle and transportation mode choice under varying gas prices and societal pressure. As the development of this model progresses, the researchers expect that it will demonstrate the circumstances and policies under which an 80% reduction in emissions by 2050 is possible.

The decisions made about agent attributes in an ABM have enormous consequences on the resulting outputs. For this reason, agent models should be kept as simple as possible, should try to ensure that agent attributes are firmly grounded in data, and should constantly be updated as new data better informs researchers of the behaviors being modeled.

To learn more about Nasrin Kahnsari’s, John B.Waldt’s, Barry Silverman’s, William W. Braham’s, Karen Shen’s, and Jae Min Lee’s research, please see their working paper on the subject

[safe_summary] => ) ) [#formatter] => text_default [0] => Array ( [#markup] =>

Edward Lorenz’s famous hypothetical scenario of a butterfly flapping its wings and somehow, weeks later, causing a tornado hundreds of miles away, illustrates the chaotic and uncertain future outcomes of seemingly inconsequential behaviors taken today. This complexity, and the resulting uncertainty of predicting the future, is vital to our understanding of what actions must be taken today to minimize the future negative effects of climate change.

Traditional Modeling

Fortunately, our ability to make informed guesses about what the future will hold has significantly improved in recent years thanks to advancements made in computing and data collection. There are several classifications of software models which have become widely used to predict outcomes within a complex system (note: this list is not exhaustive).

  1. Macro-econometric models simulate the impact of an economic or cross-sectoral policy decision on the economy. These models use several simultaneously run equations and have recently been used to model the economic impacts of energy and climate policies.
  2. System dynamics models use non-linear equations to quantify a future output from a complex system. They have proved very useful in projecting the ecological and geophysical outcomes of a 2-degree global temperature change.
  3. Bayesian network models use probability theory to infer likely effects from an initial input state. This makes these models very apt at providing early warnings, but only in situations where there is adequate precedent (e.g. medical diagnosis or an automated help system).

Each of these models fills an important role, but developing any of them requires a refined understanding of the processes that will influence the system throughout the temporal window being studied. Failure to fully understand or to acknowledge intricacies of a system or pathway can have enormous ramifications for the output metric, as Nicolas Stern argues is the case with existing attempts to set a social cost of carbon.

Agent Base Modeling

The processes of some systems are not easily inferred from existing data. This is often the case when the future direction of a system is heavily influenced by the autonomous decisions of individuals. For example, to calculate how best to relieve traffic congestion, it may not be enough to know the width of the road, the timing of traffic lights, and the density of cars. Individuals may be in a rush, get frustrated, and behave in irrational or reckless ways that increase congestion. An equation or probability based model may fail to account for abnormal behavior, and therefore would be ineffective at predicting problems.

The only way to accurately model this system is to try to integrate each autonomous agent’s behavior based on a set of individualized goals and priorities. This method of modeling complex systems is called Agent Based Modeling – or ABM, which is an alternative to more traditional equation-based models previously described. In ABM, emergent patterns often reveal themselves in ways that could not have been represented by equation based models.  

ABM to assess emission reduction in Philadelphia

A team of Penn researchers, partly supported by funding from the Kleinman Center, have begun to develop a complex agent based model to be used as a tool to assess policies for achieving carbon emission reduction goals in the Greater Philadelphia region by 2050. The model, which is based on the Theory of Planned Behavior, uses census and survey data, energy reports, and other publically accessible data for the Philadelphia region to design intention driven agents which are, when taken as a group, representative of the residents of Philadelphia. By assigning agents different attributes, such as political party affiliation and awareness of their transportation choice’s impact on the climate and environment, this ABM seeks to accurately predict the behavior of Philadelphia’s population under different environmental and societal conditions; specifically, the population’s vehicle and transportation mode choice under varying gas prices and societal pressure. As the development of this model progresses, the researchers expect that it will demonstrate the circumstances and policies under which an 80% reduction in emissions by 2050 is possible.

The decisions made about agent attributes in an ABM have enormous consequences on the resulting outputs. For this reason, agent models should be kept as simple as possible, should try to ensure that agent attributes are firmly grounded in data, and should constantly be updated as new data better informs researchers of the behaviors being modeled.

To learn more about Nasrin Kahnsari’s, John B.Waldt’s, Barry Silverman’s, William W. Braham’s, Karen Shen’s, and Jae Min Lee’s research, please see their working paper on the subject

) ) [submitted_by] => Array ( [0] => Array ( ) [#weight] => 7 [#access] => ) )
Posted by
Oscar Serpell
on June 15, 2017

Edward Lorenz’s famous hypothetical scenario of a butterfly flapping its wings and somehow, weeks later, causing a tornado hundreds of miles away illustrates how chaotic and uncertain the future outcomes of seemingly inconsequential behaviors can be. Grappling with this complexity, and with the resulting uncertainty of any prediction, is vital to understanding what actions must be taken today to minimize the future negative effects of climate change.

Traditional Modeling

Fortunately, our ability to make informed guesses about what the future will hold has improved significantly in recent years, thanks to advances in computing and data collection. Several classes of computational models have become widely used to predict outcomes within complex systems (note: this list is not exhaustive):

  1. Macro-econometric models simulate the impact of an economic or cross-sectoral policy decision on the economy. These models solve several equations simultaneously and have recently been used to model the economic impacts of energy and climate policies.
  2. System dynamics models use non-linear equations to project a future output of a complex system. They have proved very useful in projecting the ecological and geophysical outcomes of a 2-degree rise in global temperature.
  3. Bayesian network models use probability theory to infer likely effects from an initial input state. This makes them well suited to early warning, but only in situations with adequate precedent (e.g., medical diagnosis or an automated help system).
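To make the equation-based approach concrete, here is a toy sketch in the spirit of a system dynamics model: a single stock governed by non-linear feedback, integrated with simple Euler steps. The equation and parameters are purely illustrative and are not drawn from any real climate or ecological model.

```python
# Toy system-dynamics sketch: one stock with non-linear (logistic) feedback.
# All numbers are illustrative, not taken from any published model.

def simulate(stock=1.0, capacity=10.0, rate=0.3, steps=50, dt=1.0):
    """Integrate dS/dt = rate * S * (1 - S / capacity) with Euler steps."""
    history = [stock]
    for _ in range(steps):
        flow = rate * stock * (1 - stock / capacity)  # non-linear feedback term
        stock += flow * dt
        history.append(stock)
    return history

trajectory = simulate()
print(round(trajectory[-1], 2))  # the stock saturates near its capacity of 10
```

The point of the sketch is that the modeler must already know the governing equation; the model can only reproduce dynamics that the equation encodes.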

Each of these models fills an important role, but developing any of them requires a refined understanding of the processes that will influence the system throughout the time window being studied. Failing to fully understand, or to acknowledge, the intricacies of a system or pathway can have enormous ramifications for the output metric, as Nicholas Stern argues is the case with existing attempts to set a social cost of carbon.

Agent-Based Modeling

The processes of some systems are not easily inferred from existing data. This is often the case when the future direction of a system is heavily influenced by the autonomous decisions of individuals. For example, to calculate how best to relieve traffic congestion, it may not be enough to know the width of the road, the timing of traffic lights, and the density of cars. Individuals may be in a rush, grow frustrated, and behave in irrational or reckless ways that worsen congestion. An equation- or probability-based model may fail to account for such abnormal behavior and would therefore be ineffective at predicting problems.

The only way to accurately model such a system is to simulate each autonomous agent’s behavior based on a set of individualized goals and priorities. This method of modeling complex systems is called agent-based modeling (ABM), an alternative to the more traditional equation-based models described above. In ABMs, emergent patterns often reveal themselves in ways that equation-based models cannot represent.
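The traffic example above can be sketched with a minimal cellular-automaton ABM in the style of the classic Nagel–Schreckenberg model, where a single stochastic braking rule at the individual-driver level produces emergent stop-and-go waves that no aggregate equation in the model describes. The parameters below are illustrative, not calibrated to real traffic data.

```python
import random

random.seed(0)

# Minimal agent-based traffic sketch (Nagel-Schreckenberg style).
# ROAD_LENGTH cells on a ring, N_CARS drivers, illustrative parameters only.
ROAD_LENGTH, N_CARS, V_MAX, P_SLOW = 100, 30, 5, 0.3

positions = sorted(random.sample(range(ROAD_LENGTH), N_CARS))
speeds = [0] * N_CARS

def step(positions, speeds):
    n = len(positions)
    new_speeds = []
    for i in range(n):
        # Gap to the car ahead on the ring (relative order never changes).
        gap = (positions[(i + 1) % n] - positions[i]) % ROAD_LENGTH
        v = min(speeds[i] + 1, V_MAX, gap - 1)  # accelerate, but never collide
        # Individual-level "frustration": random braking by each driver.
        # This is the rule that generates emergent congestion waves.
        if v > 0 and random.random() < P_SLOW:
            v -= 1
        new_speeds.append(v)
    new_positions = [(p + v) % ROAD_LENGTH for p, v in zip(positions, new_speeds)]
    return new_positions, new_speeds

for _ in range(200):
    positions, speeds = step(positions, speeds)

print(sum(speeds) / N_CARS)  # average speed after congestion waves emerge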

Using ABM to Assess Emission Reductions in Philadelphia

A team of Penn researchers, partly supported by funding from the Kleinman Center, has begun to develop a complex agent-based model to assess policies for achieving carbon emission reduction goals in the Greater Philadelphia region by 2050. The model, which is based on the Theory of Planned Behavior, uses census and survey data, energy reports, and other publicly accessible data for the Philadelphia region to design intention-driven agents that are, taken as a group, representative of the residents of Philadelphia. By assigning agents different attributes, such as political party affiliation and awareness of their transportation choices’ impact on the climate and environment, the ABM seeks to predict the behavior of Philadelphia’s population under different environmental and societal conditions; specifically, the population’s vehicle and transportation mode choices under varying gas prices and levels of societal pressure. As development progresses, the researchers expect the model to demonstrate the circumstances and policies under which an 80% reduction in emissions by 2050 is possible.
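As a purely hypothetical illustration of what an intention-driven agent might look like, the sketch below scores a transit intention from the three Theory of Planned Behavior components (attitude, subjective norm, perceived behavioral control) plus a gas-price term. The attribute names, weights, and threshold are invented for this example and are not taken from the Penn team's actual model.

```python
from dataclasses import dataclass

# Hypothetical intention-driven agent in the spirit of the Theory of Planned
# Behavior. Weights and thresholds below are invented for illustration;
# they are NOT the Penn researchers' calibrated values.

@dataclass
class Commuter:
    climate_awareness: float  # 0..1, attitude toward low-carbon travel
    social_pressure: float    # 0..1, subjective norm favoring transit
    transit_access: float     # 0..1, perceived behavioral control

    def transit_intention(self, gas_price: float) -> float:
        """Combine TPB components; higher gas prices push intention upward."""
        price_effect = min(gas_price / 5.0, 1.0)  # saturating price response
        return (0.3 * self.climate_awareness
                + 0.3 * self.social_pressure
                + 0.2 * self.transit_access
                + 0.2 * price_effect)

    def chooses_transit(self, gas_price: float, threshold: float = 0.5) -> bool:
        return self.transit_intention(gas_price) >= threshold

agent = Commuter(climate_awareness=0.5, social_pressure=0.4, transit_access=0.7)
print(agent.chooses_transit(gas_price=2.0))  # -> False (cheap gas)
print(agent.chooses_transit(gas_price=4.5))  # -> True (expensive gas)
```

Simulating thousands of such agents with attributes sampled from census and survey distributions, rather than one representative agent, is what lets an ABM surface population-level shifts in mode choice as conditions change.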

The decisions made about agent attributes in an ABM have enormous consequences for the resulting outputs. For this reason, agent models should be kept as simple as possible, agent attributes should be firmly grounded in data, and models should be updated continually as new data better informs researchers about the behaviors being modeled.

To learn more about the research of Nasrin Khansari, John B. Waldt, Barry Silverman, William W. Braham, Karen Shen, and Jae Min Lee, please see their working paper on the subject.

Our blog highlights the research, opinions, and insights of individual authors. It does not represent the voice of the Kleinman Center as a whole.